
Evaluation of Kerrighed cluster operating system for the execution of Internet services

Robert P. Guziolowski

December 5, 2006


Contents

1 Introduction
2 Field of the research
  2.1 Background
    2.1.1 Clusters
    2.1.2 Cluster view
    2.1.3 Cluster OS general features
  2.2 Goal of the research
  2.3 Research conditions
3 Kerrighed's capabilities
4 Testing environment
  4.1 Hardware
  4.2 Software
    4.2.1 Operating system
    4.2.2 Web server
    4.2.3 Mail server
5 Tests
  5.1 Installing Kerrighed
  5.2 Web server
    5.2.1 Prerequisites
    5.2.2 Testing process and encountered problems
  5.3 Mail server
    5.3.1 Prerequisites
    5.3.2 Testing process and encountered problems
6 Future work
  6.1 Future development and suggestions
    6.1.1 Capabilities checker
    6.1.2 Capabilities requirements
    6.1.3 Capabilities preanalyser
  6.2 Corrections
7 Conclusion

Chapter 1

Introduction

Since the invention of computers, a lot of research has been done to increase their computing power. The advantages are obvious: less time spent on the computation itself, and/or more exact results obtained with more sophisticated computing models. Nevertheless, for some computations single computers were not enough, and so parallel supercomputers were invented. Parallel supercomputers provided very high computing power, especially when they were specialized in a specific operation type, e.g. vector supercomputers. However, supercomputers, whether specialized or general purpose, were, and still are, rather expensive. Thus, with the introduction of computer networks, the idea appeared of connecting a number of ordinary computers and making them work together. As networking technology has become cheaper and faster, this architecture has become significantly more attractive. That is how clusters were born.

Nowadays, the computational power of clusters is comparable with the fastest supercomputers in the world (see: ), while the components are widely available and relatively cheap. Thus, everyone can build his own cluster and share it with other users. Increasing its computational power is also simple: just add more ordinary computers. Several problems arise, however:

1. how to manage this type of distributed environment: each computer (node) of a cluster can work separately, using its own memory, disk(s), or processor(s), unaware of other nodes; thus, a special Operating System (OS), which allows all the available resources to be managed in a reasonable way, has to be used,

2. how to use this environment conveniently and efficiently, both for the users and for the administrators, and

3. does the cluster we are using really have higher computing power than a general-purpose computer or a single-machine server, and if so, how much higher?

While the answers to the first two questions are relatively simple (various operating systems for clusters exist, providing different features and approaches to manage and use clusters), the answer to the third question cannot be stated so easily. More conditions have to be known, such as: what kind of measure is taken into account (user time, system time, overall time spent by the testing application in the system), is the testing application using all the available nodes or not, is the testing application using specific features of the OS, was it intentionally written for a distributed environment or not, and many more.

The goal of this internship is to evaluate one of the operating systems designed for clusters, Kerrighed, in the execution of Internet services. The evaluation is made using two widely available services: a Web server and an electronic mail server.

Chapter 2

Field of the research

2.1 Background

2.1.1 Clusters

Following Wikipedia, the free encyclopedia:

A computer cluster is a group of loosely coupled computers ([in other words: nodes]) that work together closely so that in many respects it can be viewed as though it were a single computer. Clusters are commonly, but not always, connected through fast local area networks. Clusters are usually deployed to improve speed and/or reliability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or reliability.

With respect to the tasks they fulfil, clusters can be divided into:

High-availability clusters, implemented primarily for the purpose of improving the availability of the services which the cluster provides.

Load-balancing clusters, implemented primarily for improved performance; they commonly include high-availability features as well.

High-performance clusters, implemented primarily to provide increased performance by splitting a computational task across many different nodes in the cluster; they are most commonly used in scientific computing.

2.1.2 Cluster view

A cluster is a group of loosely coupled computers, each of which contains its own processor(s), RAM, hard drive(s), etc. A programmer has to be aware of the structure of the cluster, such as the number of nodes or the connections between them, in order to locate the node within the cluster on which a thread or a process is running (see Figure 2.1). This makes application development complicated and requires a special approach and tools, such as MPI [12], Linda [11]/Glenda [5], etc.

[Figure 2.1: Cluster view. On the lowest level of the cluster, separate nodes exist, each with an independently working operating system. On top of them, user applications are run. These applications have to be aware of the cluster structure in order to communicate between separate threads running on different nodes.]

More desirable is a view where all of the resources of the nodes of the cluster appear to the user as one system. An operating system providing such a view is called a Single System Image Operating System (SSI OS). Several SSI OSs are currently in development (Kerrighed, Mosix/OpenMosix, OpenSSI). The SSI OS hides the distributed nature of the cluster from the user: it does not matter on which node the user logs in to the cluster, where he saves his data, or on which node his application is started. All of these tasks are handled by the operating system. The conclusion is that applications run by the user are also not aware of the distributed environment (see Figure 2.2). Therefore, all applications which were not developed for distributed environments can be easily run on clusters, and possibly much faster. The question which arises is: what kind of application can be run faster, and how much?

[Figure 2.2: Cluster SSI view. Compared to Figure 2.1, an additional software level is provided. It is middleware, which hides the distributed architecture of the cluster. Thus, a user application can be run in the same way as it would be run on a single machine. All of the distributed issues are moved from the user application to the middleware.]
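To make the contrast between the two views concrete, the following minimal sketch (added for illustration; it does not appear in the report) shows the explicit, cluster-aware style that tools such as MPI require: the program itself has to ask the runtime how many processes take part and which one it is. An SSI OS aims to make exactly this kind of awareness unnecessary for ordinary applications.

/* Illustrative only: a cluster-aware program written against MPI.
 * The code must ask the runtime for its rank and the total number
 * of processes, i.e. it is explicitly aware of the cluster structure. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* join the parallel job          */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?            */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes run in all? */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}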

2.1.3 Cluster OS general features

Running a user application with the approach presented in Figure 2.2 is very easy and convenient. Nevertheless, suppose we have a user logging service which can run on only one, well-specified and well-known (to the users) node, allowing users to log in to the cluster (obviously, it can later dispatch user sessions to other nodes in order to balance the load). It seems that for this type of application a specialized operating system is not needed, or, in other words, some of the characteristics provided by a cluster operating system are not needed. Let us call these characteristics features. What kind of features should a cluster operating system provide, to be set or unset, in order to run an ordinary application in a distributed environment?

As the cluster consists of separate nodes, it is easy to deduce that multithreaded applications can be run on more than one node at a particular moment. Thus, we will concentrate now on this type of application. To visualise the need for specific features, let us suppose we have the following application (see Figure 2.3):

1. there is one master process which starts some additional processes or threads,

2. one of the newly started processes, the communication process, listens for incoming connections on an inet socket, and

3. the rest of the processes, the work processes, do some computational work, but are able to communicate with each other using Inter-Process Communication mechanisms, like named/unnamed pipes.

This hypothetical application can run on a single machine without problems, as all the needed mechanisms are present and well implemented. In a cluster, however, specialized solutions allowing, for example, threads to communicate with each other are highly necessary. Assume, for the sake of simplicity of the example, a cluster with 4 nodes, numbered from 1 to 4, one master thread, one communication thread, and two worker threads. Suppose we start this application on one of the nodes of the cluster, let us say node number 1. If none of the cluster operating system features are set, this application runs only on this node. Nevertheless, the work and communication threads need more computation power than one node is able to provide. Thus, a feature allowing new processes or threads to be created on a distant node is needed. Let us call it the distant fork feature. Use of this feature is represented by the yellow circles in Figure 2.3.

[Figure 2.3: Example application with explanation of the needed middleware features (yellow circles stand for fork()-like operations, the red one for socket communication, and blue ones for inter-process communication, i.e. pipes).]

After setting the above-mentioned feature, new processes or threads are created on (supposedly) separate nodes. Assume, for the sake of simplicity, that the communication thread starts on node number 2, and the worker threads on nodes 3 and 4. Nevertheless, the master process is still working on node 1. If any user or another application tries to connect to our hypothetical application through the inet socket (see the communication thread above), it has to address it somehow. The most convenient way is to use the address of the node where the master thread resides. Thus, another mechanism is needed, allowing inet sockets to migrate from one node to another (the red circle in Figure 2.3). Let us call it the inet migration feature.

The last feature needed by our hypothetical application is cluster-wide inter-process communication (the blue circles in Figure 2.3). A mechanism allowing inter-process communication between processes on the same node or on different nodes is needed. Let us call it the ipc migration feature.

Setting these 3 features allows our application to run cluster-wide. The lack of any of them prevents it:

1. without the distant fork feature the application obviously will not run cluster-wide (none of the newly created processes is allowed to start on a distant node or nodes),

2. without the inet migration feature, all of the newly created processes have the possibility of starting on a distant node or nodes except the communication process (inet socket(s) cannot be migrated; connecting to the application would be impossible if this process had started on a distant node), and

3. without the ipc migration feature, only the communication process can be started on a different node, because the work processes would not be able to communicate with each other.

Thus, not only is setting or unsetting specific features important, but setting one feature can also imply setting another one. The cluster operating system should allow users to set or unset these features in order to run their applications cluster-wide or not.
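The skeleton of such an application, written for an ordinary single machine, could look as follows (an illustrative sketch, not taken from the report; processes are used instead of threads for simplicity). Each marked spot is exactly where one of the three features is needed once the program is moved onto a cluster.

/* Illustrative skeleton of the hypothetical application from Figure 2.3.
 * Distant fork would be needed at the fork() calls, inet migration for the
 * listening socket, and ipc migration for the pipe between the workers.   */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int workpipe[2];
    if (pipe(workpipe) < 0) { perror("pipe"); exit(1); }    /* ipc migration  */

    if (fork() == 0) {                                      /* distant fork   */
        /* communication process: listens on an inet socket */
        int s = socket(AF_INET, SOCK_STREAM, 0);            /* inet migration */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(s, (struct sockaddr *)&addr, sizeof(addr));
        listen(s, 5);
        int c = accept(s, NULL, NULL);                      /* wait for a client */
        if (c >= 0) close(c);
        exit(0);
    }

    if (fork() == 0) {                                      /* distant fork   */
        /* work process #1: sends a result to work process #2 */
        close(workpipe[0]);
        write(workpipe[1], "result", 6);
        exit(0);
    }

    if (fork() == 0) {                                      /* distant fork   */
        /* work process #2: receives the result from work process #1 */
        char buf[16];
        close(workpipe[1]);
        read(workpipe[0], buf, sizeof(buf));
        exit(0);
    }

    /* master process: wait for all children */
    close(workpipe[0]);
    close(workpipe[1]);
    while (wait(NULL) > 0)
        ;
    return 0;
}

On a single machine all three mechanisms (fork, inet sockets, pipes) work out of the box; on a cluster the distant fork, inet migration and ipc migration features decide whether the same calls can span nodes.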

2.2 Goal of the research

The goal of the research is to test the execution characteristics of one of the Single System Image Operating Systems, Kerrighed [8], using widely available software providing the most popular Internet services: a Web server and an electronic mail server.

Kerrighed is a system based on Linux. Moreover, it is fully integrated with the Linux kernel in order to provide cluster-wide mechanisms, such as cluster-wide process and thread identification. It provides a set of features which allow the user to run his applications cluster-wide without knowing the architecture of the cluster or interfering with the application sources.

Internet services work in a client-server manner. Thus, the work which has to be done by the server can be easily divided into smaller parts, as the incoming requests of the clients arrive independently. This implies a fine granulation of work, which can afterwards be well divided among the nodes of the cluster.

2.3 Research conditions

In general we can divide cluster testing into two groups: testing with the use of specialized applications, and testing with the use of applications not prepared to run in a distributed environment. The first kind of test uses an application which is somehow aware of the environment it works in. Thus, it can optimize the use of existing connections in the cluster, specific features of the middleware, etc. Usually, results obtained with this type of application are very fruitful: the application scales very well, communication between threads or processes runs smoothly, and the acceleration of computing is significant. In the latter case, the results depend more on the quality of the operating system: how transparent it is for the application, what kind of features it provides, and how useful they are. Thus, from the point of view of an end user, the second type of test provides a more real-life measurement.

During this research the second type of test was of interest. The conditions which were emphasised were, firstly, to test the operating system using a completely unchanged application, and then, secondly, to allow some modifications to the application, but only those available in the operating system: no modification which would have required recompiling the application was allowed.

Chapter 3

Kerrighed's capabilities

One of the crucial Kerrighed characteristics is the possibility of managing single system image features, such as shared memory, migratable streams, distant forking, etc. These features are called capabilities, and can be set for each process separately. Capabilities can also be inherited by a child process from its parent process. Capabilities are not a mechanism themselves: they only state that a specified mechanism should be used for the process, and have no influence on how the mechanism works.

Capabilities can be divided into 4 functional groups:

Permitted. Permitted capabilities are the capabilities which are allowed to be set for a process. This group influences the mechanism of setting effective and inheritable permitted capabilities for the process, protecting against setting a capability which the process is not allowed to use.

Effective. Effective capabilities are the capabilities which can be used by the current process. They are a subset of the permitted capabilities.

Inheritable permitted. Inheritable permitted capabilities are the capabilities which are allowed to be set for child processes of the current process. This group influences the mechanism of setting inheritable effective capabilities for the process, protecting against setting a capability which the child processes are not allowed to use.

Inheritable effective. Inheritable effective capabilities are the capabilities which can be used by the child processes of the current process. Moreover, child processes are created with this set of capabilities (see also below). These capabilities are a subset of the inheritable permitted capabilities.

The full list of the available capabilities can be obtained from the Kerrighed documentation [9]. For the purpose of this research only 4 of them were used:

DISTANT FORK. This capability is used by the fork system call to decide if it should try forking the new process on a distant node.

CAN MIGRATE. This capability is used by the default scheduler to decide if it can migrate a process.

USE INTRA CLUSTER KERSTREAMS. This capability is used to decide if the created sockets should be created (if possible) as Kerrighed-aware sockets local to the cluster, with the possibility of migration within the cluster.

USE WORLD VISIBLE KERSTREAMS. This capability is used to decide if the created sockets should be created (if possible) as Kerrighed-aware sockets able to migrate within the cluster and communicate with the outside world.

Another issue is the mechanism of inheriting capabilities by newly created processes. Let us label the permitted capabilities by P, the effective by E, the inheritable permitted by IP, and the inheritable effective by IE. Denoting the capabilities of the newly created process by P', E', IP' and IE', we can write:

P'  = IP
E'  = IE
IP' = IP
IE' = IE

It means that a newly created process is created with its set of permitted capabilities equal to the set of inheritable permitted capabilities of its parent, and similarly for the effective and inheritable effective sets. The inheritable capabilities of the child process are equal to the inheritable capabilities of the parent process (see Figure 3.1).

Example of use of the DISTANT FORK capability

Suppose we have a program (see Figure 3.2) with a master process (P1) which creates two child processes (P2 and P3), and each of those child processes creates another two child processes (P2 creates P4 and P5, while P3 creates P6 and P7).

[Figure 3.1: Capabilities inheritance mechanism.]

[Figure 3.2: Structure of the example program.]

Suppose we also have a cluster of two nodes. Let us see what happens in three cases: the distant forking capability is first set as an effective capability for the master process, then as an inheritable effective capability, and finally as both effective and inheritable effective.

Case 1: DISTANT FORK as an effective capability

When DISTANT FORK is set as an effective capability, the master process is allowed to distant fork its child processes. Thus, processes P2 and/or P3 may be created on a node different from that of the master process P1 (see Figure 3.3).

Case 2: DISTANT FORK as an inheritable effective capability

When DISTANT FORK is set as an inheritable effective capability, the master process is not allowed to distant fork its child processes, but these child

processes are allowed to do it. Thus, one or more of the processes P4, P5, P6, and/or P7 may be created on a node different from that of the master process P1 (see Figure 3.4).

[Figure 3.3: Example program process creation with the effective distant fork capability.]

[Figure 3.4: Example program process creation with the inheritable effective distant fork capability.]

Case 3: DISTANT FORK as an effective and inheritable effective capability

When DISTANT FORK is set as an effective as well as an inheritable effective capability, the master process and its child processes are allowed to distant fork. Thus, one, more, or even all of the processes P2, P3, P4, P5, P6, and/or P7 may be created on a node different from that of the master process P1 (see Figure 3.5).

[Figure 3.5: Example program process creation with both the effective and the inheritable effective distant fork capability.]
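The inheritance rule and the DISTANT FORK cases above can be summarised in a small toy model. The sketch below is purely illustrative (the type and constant names are invented; it is not Kerrighed code): a child's permitted and effective sets are copied from the parent's inheritable sets, while the inheritable sets are passed on unchanged.

/* Toy model of the capability inheritance rule P' = IP, E' = IE,
 * IP' = IP, IE' = IE.  Names are hypothetical; this is not Kerrighed code. */
#include <stdio.h>

typedef unsigned int capset;                 /* one bit per capability      */

enum { CAP_DISTANT_FORK = 1u << 0,
       CAP_CAN_MIGRATE  = 1u << 1 };

struct caps {
    capset permitted, effective;             /* P, E                        */
    capset inh_permitted, inh_effective;     /* IP, IE                      */
};

/* Capabilities of a newly created child, derived from its parent. */
static struct caps inherit(const struct caps *parent)
{
    struct caps child;
    child.permitted     = parent->inh_permitted;   /* P'  = IP */
    child.effective     = parent->inh_effective;   /* E'  = IE */
    child.inh_permitted = parent->inh_permitted;   /* IP' = IP */
    child.inh_effective = parent->inh_effective;   /* IE' = IE */
    return child;
}

int main(void)
{
    /* Case 2 from above: DISTANT FORK set only as inheritable effective on P1. */
    struct caps p1 = { 0, 0, CAP_DISTANT_FORK, CAP_DISTANT_FORK };
    struct caps p2 = inherit(&p1);           /* child of P1                 */
    struct caps p4 = inherit(&p2);           /* grandchild of P1            */

    printf("P1 may distant fork: %s\n", (p1.effective & CAP_DISTANT_FORK) ? "yes" : "no");
    printf("P2 may distant fork: %s\n", (p2.effective & CAP_DISTANT_FORK) ? "yes" : "no");
    printf("P4 may distant fork: %s\n", (p4.effective & CAP_DISTANT_FORK) ? "yes" : "no");
    return 0;
}

Run as written, the model reproduces Case 2: P1 may not distant fork, while P2 and P4 may.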


Chapter 4

Testing environment

In this chapter a general description of the testing environment, both hardware and software, is provided. In the software part the testing applications, which are the web server Apache [3] and the mail server Postfix [14], are also described. These applications have been chosen for two reasons:

Work separation. Both applications give the possibility of separating working threads from managing and communication threads.

Scalability with work balancing. Both applications use threads or processes which can be easily migrated to different nodes of the cluster in order to balance the work and use the available computing power more reasonably.

4.1 Hardware

The cluster used for the tests consisted of 4 similar PC computers, each equipped with an Intel Pentium III 1 GHz processor and 512 MB of RAM. All the nodes were connected by a 100 Mbit/s Ethernet switch.

4.2 Software

4.2.1 Operating system

Each of the nodes was working under the control of Kerrighed version 1.0.2, based on the Debian GNU/Linux operating system, version , kernel version , gcc version .

4.2.2 Web server

Apache as a web server

The Apache web server [3] is an open source HTTP server for modern operating systems. The goal of the project is to provide a secure, efficient and extensible web server synchronized with the current HTTP standards. Apache has a modular architecture and allows several threading/multiprocessing modes to be used, which are also provided as modules. For the testing purposes Apache version was used.

Thread models in Apache

The Apache server provides several multithreading/multiprocessing models, called Multi-Processing Modules (MPMs). These models are: prefork, worker, leader, threadpool, and perchild. A short description of each of the models, with its possible use on the cluster, is provided below.

Prefork

This MPM implements a non-threaded pre-forking web server. A single control process launches a defined number of child processes which listen for incoming connections and serve them when they appear. There are always several spare or idle listening processes ready to serve incoming requests. This assures that clients do not have to wait for a request-serving process to be created in order to have their requests served (a generic sketch of this pre-forking pattern is shown after the MPM descriptions). While running Apache with the prefork MPM within a cluster, the single control process is located on a well-known node, while the rest of the preforked processes can be distributed over all of the nodes of the cluster, waiting for a request to serve. This MPM is the default when using Apache under Unix-like operating systems.

Worker

This MPM implements a hybrid multi-process, multithreaded server. A single control process launches a defined number of child processes. Each of the child processes is responsible for launching a fixed number of threads: one of these threads becomes the thread listening for incoming connections, while the rest are ready to serve requests passed on by the listening thread when they arrive. There is always a spare or idle pool of request-serving threads. This assures that clients do not have to wait for a request-serving thread to be created in order to have their requests served. While running Apache with the worker MPM within a cluster, the single control process is located on a well-known node. It creates child processes

on the same node or on different nodes, without the possibility of migration. The child processes create the non-migratable listening thread on the same node as themselves, and migratable serving threads on the same node (in case they are not on the same node as the single control process) or on different nodes.

Leader

An experimental version of the worker MPM, implemented with the use of the leaders/followers design pattern [10] in order to coordinate work among request-serving threads. The main difference between this MPM and the worker MPM is that in the latter each of the created threads has its fixed role (either listening or serving requests), while in the leader MPM the role of a thread can change over time (the leader thread listens for connections and, after receiving a request, changes into a worker thread, while the listening duty goes to another idle thread in the queue). Usage within the cluster is like that of the worker MPM. This MPM is experimental: it may or may not work as expected.

Threadpool

This MPM is an experimental version of the worker MPM (described above) which uses a pool of threads for serving incoming requests. While running Apache with the threadpool MPM within a cluster, the pool of threads is created on the same node as the single control process, but with the possibility of migration. Thus, incoming connections can be served by threads from the pool, load-balanced among all of the nodes of the cluster. This MPM is a developer playground and highly experimental: it may or may not work as expected.

Perchild

This MPM implements a hybrid multi-process, multi-threaded web server. A single control process launches a defined number of child processes. Each of the child processes is responsible for launching a fixed number of threads, which listen for incoming connections and serve the requests. Fluctuations in load are handled by increasing or decreasing the number of threads in a separate child process. Usage of this MPM has not been studied within a cluster. This MPM is not functional: development is not complete and is not currently active.
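As a generic illustration of the pre-forking idea described above (this is a sketch of the pattern, not Apache source code): a single control process opens the listening socket once and then forks a fixed number of children, each of which blocks in accept() on the shared socket and serves whatever connection it receives.

/* Generic pre-forking server skeleton (illustrative, not Apache code):
 * the parent opens one listening socket, forks N children, and each
 * child accepts and serves connections on the shared socket.          */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define NCHILDREN 4   /* like the 4 preforked processes used in the tests */

static void child_loop(int listenfd)
{
    static const char reply[] =
        "HTTP/1.0 200 OK\r\nContent-Length: 3\r\n\r\nok\n";
    for (;;) {
        int conn = accept(listenfd, NULL, NULL);  /* all children share this  */
        if (conn < 0)
            continue;
        write(conn, reply, sizeof(reply) - 1);    /* trivial request handling */
        close(conn);
    }
}

int main(void)
{
    struct sockaddr_in addr;
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    if (bind(listenfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(listenfd, 128) < 0) {
        perror("bind/listen");
        return 1;
    }

    for (int i = 0; i < NCHILDREN; i++)           /* the control process */
        if (fork() == 0)
            child_loop(listenfd);                 /* never returns       */

    while (wait(NULL) > 0)                        /* wait for children   */
        ;
    return 0;
}

On a Kerrighed cluster, the fork() calls in the loop are exactly where the distant fork capability would allow the children to be placed on other nodes, while the shared listening socket is what the KERSTREAMS capabilities would have to handle.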

4.2.3 Mail server

Postfix as a mail server

Postfix [14] is an electronic mail delivery program, an alternative to the widely-used Sendmail. It attempts to be fast, easy to administer, and secure. It is compatible with the Sendmail interface, while the internal implementation is completely different. Postfix is provided as a set of commands and servers, multithreaded/multiprocessed or not. For the testing purposes Postfix version was used.

Thread model in Postfix

The multithreading/multiprocessing model cannot be presented exactly, because no exact document has been found on this subject. Only a hypothetical description is provided below.

[Figure 4.1: Part of the hypothetical Postfix threading/multiprocessing model. The green line shows processes created on demand.]

At the startup of the Postfix system, several (probably) multithreaded servers are started: master (managing other servers), pipe (providing the message pipe structure), etc. These servers are responsible for delivering incoming and

outgoing mail. Some other processes, like spawn processes, are started on demand by the master process (see Figure 4.1) and then run non-Postfix commands in separate processes. No more research on the threading model used by the Postfix system was conducted.
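The spawn mechanism mentioned above can be pictured with a generic sketch (illustrative only, not Postfix source code): the parent creates pipes, forks, attaches the pipes to the child's standard input and output, and then executes an external, non-Postfix command. This is the same kind of hand-off that the filtering scripts in Section 5.3 rely on, where the launched command is the mail filter.

/* Generic "spawn"-style hand-off (illustrative, not Postfix source):
 * fork a child, connect pipes to its standard input and output, and
 * exec an external command (here /bin/cat as a stand-in filter).      */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int in_pipe[2], out_pipe[2];      /* parent -> child, child -> parent */
    char buf[256];
    ssize_t n;

    if (pipe(in_pipe) < 0 || pipe(out_pipe) < 0)
        return 1;

    if (fork() == 0) {
        /* child: wire the pipes onto stdin/stdout, then run the command */
        dup2(in_pipe[0], STDIN_FILENO);
        dup2(out_pipe[1], STDOUT_FILENO);
        close(in_pipe[0]);  close(in_pipe[1]);
        close(out_pipe[0]); close(out_pipe[1]);
        execl("/bin/cat", "cat", (char *)NULL);   /* stand-in for a filter */
        _exit(127);                               /* exec failed           */
    }

    /* parent: send data to the child and read the (filtered) result back */
    close(in_pipe[0]);
    close(out_pipe[1]);
    write(in_pipe[1], "test message\n", 13);
    close(in_pipe[1]);                            /* signal end of input   */
    while ((n = read(out_pipe[0], buf, sizeof(buf))) > 0)
        write(STDOUT_FILENO, buf, n);
    close(out_pipe[0]);
    wait(NULL);
    return 0;
}

Here /bin/cat merely echoes the data back; in the tests described in Section 5.3 the external command in this position is the filtering script that feeds the mail through SpamAssassin and Anomy Sanitizer.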


Chapter 5

Tests

5.1 Installing Kerrighed

The installed version of Kerrighed was obtained from [8]. Installation was conducted without any problems. Nevertheless, small bugs discovered later led to using the Kerrighed version available through a local CVS server, which was updated more often.

5.2 Web server

5.2.1 Prerequisites

The installed version of the Apache server was obtained from [3]. It was compiled with the worker MPM (see Section 4.2.2) and installed in the cluster-wide shared directory.

In order to test the scalability of Apache running on the Kerrighed cluster, several load testing tools were obtained. A short characterization of each of them is provided below. A summary of the measured characteristics can be viewed in Table 5.1.

Flood

Flood [4] is an experimental HTTP load testing tool. The tests are based on a configuration file which contains definition(s) of a worker. Each worker contains a list of addresses, which can be accessed in a random, round-robin, or sequenced (with cookie propagation) manner. The main measurements collected consist of TCP connect time (establishing a connection), time to send the request, time until the first response data is received, and time to receive a full response (for details consult Table 5.1).

The usual output of Flood contains a lot of information, which can be presented in a more readable format (with the use of a tool provided with Flood). Example output is shown below:

Slowest pages on average (worst 5):
Average times (sec)
connect write read close hits URL
Requests: 100 Time: 8.93 Req/Sec:

Http load

Http load [6] is a multithreaded HTTP test client which runs in a single process. The tests are based on several parameters and a file containing the URLs to test. One of the crucial parameters is the start specifier:

rate: starts a specified number of connections every second, and

parallel: keeps a specified number of parallel fetches running simultaneously.

The main measurements collected consist of TCP connect time (establishing a connection), time until the first response data is received, and average I/O rate per connection (for details consult Table 5.1). The usual output of Http load is human readable. Example output is shown below:

45 fetches, 6 max parallel, bytes, in seconds
mean bytes/connection
fetches/sec, bytes/sec
msecs/connect: mean, max, min
msecs/first-response: mean, max, min
HTTP response codes: code

Httperf

Httperf [7] is a tool to measure web server performance. The tests are based on several parameters and one web address to test, and can be run in a request-oriented or session-oriented manner (later we talk only about the request-oriented approach). The most important parameters are:

num-conns: the total number of connections to create,

rate: the fixed rate (per second) at which connections are created,

timeout: the amount of time to wait for a server reaction when establishing a TCP connection, sending a request, waiting for a reply, and receiving a reply, and

think-timeout: the maximum time the server may need to produce the reply for a given request (this value, when specified, is added to the timeout value described above).

The main measurements collected consist of TCP connect time (establishing a connection), time until the first response data is received, request and reply rate, and average I/O rate summed across all connections, as well as CPU load (for details consult Table 5.1). The usual output from Httperf is human readable and relatively rich. Example output is shown below:

Total: connections 100 requests 100 replies 100 test-duration s
Connection rate: 4.9 conn/s (204.7 ms/conn, <=5 concurrent connections)
Connection time [ms]: min avg max median stddev 54.6
Connection time [ms]: connect
Connection length [replies/conn]:
Request rate: 4.9 req/s (204.7 ms/req)
Request size [B]: 65.0
Reply rate [replies/s]: min 4.4 avg 4.9 max 5.0 stddev 0.3 (4 samples)
Reply time [ms]: response transfer
Reply size [B]: header content footer 0.0 (total )
Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0
CPU time [s]: user 7.48 system (user 36.5% system 61.7% total 98.2%)
Net I/O: 57.5 KB/s (0.5*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

Torture

Torture [16] is a multithreaded script written in PERL [13]. The tests can be run in two modes:

Fetching tests. Concurrent fetches of the contents of a web address, with the user-specified address, number of threads and number of fetches for each thread.

Vulnerability tests. Testing for vulnerability to static buffer overflow problems by sending random-length data to the server, with the user-specified address, number of threads and random data limit.

The main measurements collected consist of server response time (equivalent to reading data from a connection) and average I/O rate summed across all connections (for details consult Table 5.1). The usual output from Torture is human readable. Example output is shown below:

Transactions: 25
Elapsed time: sec
Bytes Transferred: bytes
Response Time: 5.63 sec
Transaction Rate: 0.89 trans/sec
Throughput: bytes/sec
Concurrency: 5.0
Status Code 200: 25

Measured operation          Flood   Http load   Httperf   Torture
connection open time          x         x          x
connection write time         x
connection read time          x         x          x          x
connection close time         x
connection rate                                    x
request rate                                       x
I/O rate                                x          x          x
reply (transaction) rate                           x          x
connection life time                               x
CPU rate                                           x

Table 5.1: Summary of characteristics measured by the suggested testing tools.

5.2.2 Testing process and encountered problems

Approach 1: correctness test

A simple test consisted of running the Apache server without using Kerrighed capabilities, thus only on one node. The web server was fully operational. Therefore, the correctness test ended successfully.

Approach 2: client outside the cluster

The second approach consisted of running Apache fully on the cluster using the prefork threading model (see Section 4.2.2 for details) with 4 preforked processes. The testing tools were placed on a different machine in order not to put load on the server CPU and other resources (such as socket descriptors).

Nevertheless, two problems arose, both concerning sockets. Firstly, running Apache with the capability USE WORLD VISIBLE KERSTREAMS was not working as expected: the capability was not yet fully implemented. Secondly, using the capability USE INTRA CLUSTER KERSTREAMS caused the client testing tools not to be able to connect to the created sockets.

Approach 3: client inside the cluster

The mentioned error with connectivity from outside the cluster to the sockets created using the capability USE INTRA CLUSTER KERSTREAMS led to moving the testing tools to one of the nodes of the cluster. In order to keep the testing tools from being interfered with by other applications, as well as to allow the Apache server to respond as well as possible, the node used by the testing tools had to be excluded from the global scheduling and distant process creation algorithms, but not from the cluster itself. Thus, several changes to the following source files were made:

aragorn/scheduler.c
aragorn/schedulers/cpu_scheduler2.c
arch/asm-i386/fork.h

After these changes several tests were conducted, proving that one of the nodes is excluded from both the scheduling and the distant forking processes. Unfortunately, after starting Apache on 3 nodes only (with 3 preforked processes), and the test tools on the 4th node, establishing a connection was still not possible. The testing tools reported a Connection refused error, and on the console of the node on which the connection establishment attempt was made, the following error occurred:

legolas_cancel_entry: todo

Thus, further testing was cancelled.

Results achieved

Testing the Apache server running on the Kerrighed cluster failed due to the above-mentioned errors, malfunctions and unimplemented features. Unfortunately, no solutions for these errors were suggested for the used kernel version.

Suggestions and solutions

The above-mentioned errors were partially solved in a new version of Kerrighed, for kernel version 2.6. Due to the ongoing work on the new version of Kerrighed, and its only partial implementation, no further tests were conducted on the used Kerrighed version, nor was the Kerrighed version changed.

5.3 Mail server

5.3.1 Prerequisites

The installed version of the Postfix server was obtained from [14]. It was successfully compiled and installed in the cluster-wide shared directory.

The idea behind using a mail server as another cluster testing tool was that running the mail server on only one node of the cluster would avoid the problem of unimplemented/malfunctioning features in socket migration. As mail delivery is not a highly computational problem, the mails would have to be checked by antispam and/or antivirus software, which could be run in parallel on the remaining nodes of the cluster. For these reasons, two tools, SpamAssassin [15] and Anomy Sanitizer [1], were chosen. Short characteristics of these tools, as well as the preparation of a filter script for the Postfix server, are presented below.

SpamAssassin

SpamAssassin [15] is antispam software. It tests the mail content against predefined expressions, using an increasing scoring system. If an e-mail is scored above a specified level, it is considered spam.

Anomy Sanitizer

Anomy Sanitizer [1] is mail sanitizing software. It checks the mail contents in order to avoid and correct MIME exploits, web bugs, malicious HTML tags, and many others.

Filter script

Incorporating SpamAssassin and Anomy Sanitizer to work within the Postfix server requires creating a special filtering script. During the tests, two filter scripts were written. More details about them can be found in Section 5.3.2.

Load generator

For the testing purposes, a simplified load testing tool was developed in the PERL [13] language. This load generator allows the user to send a specified number of e-mails from a specified file containing the receiver address and the contents of the mail.

5.3.2 Testing process and encountered problems

Approach 1: correctness test

A simple test consisted of running the Postfix server on the cluster without the use of Kerrighed capabilities. The mail server was fully operational.

Approach 2: distributed test

Another test was conducted, giving the environment the DISTANT FORK and CAN MIGRATE capabilities. Unfortunately, the separate threads/processes of the Postfix server lost the connections between them. This had an impact on the later filtering script architecture.

Approach 3: simple content filtering

A filtering script was prepared according to [2]. After manually testing the filtering script on several messages, the Postfix configuration was changed in order to use it in the process of mail delivery. Unfortunately, two problems arose. A script prepared in the way described in the above document cannot be used for locally delivered mails (where the origination and destination hosts are the same). Furthermore, mails incoming from outside the cluster were not sent correctly to the cluster, due to a more general configuration which could not be changed easily.

Approach 4: advanced content filtering

In order to filter locally delivered mail, the filtering script had to be changed. Its tasks were to read data from the incoming connection (inet socket or IPC pipe), direct this data to the filters (SpamAssassin and Anomy Sanitizer), and resend it to Postfix. The Postfix configuration was changed in order to allow the following actions:

1. the incoming mail was marked as unfiltered,

2. only unfiltered mails were directed to the filtering script,

3. the filtering script was run outside the Postfix environment, as a separate process launched by the spawn processes, and

4. after filtering, the mail was marked as filtered and could be delivered to the destination mailbox.

According to the spawn process documentation, the connection between the Postfix server and a launched process is realised with the use of inet sockets or IPC pipes, which are connected to STDIN (standard input) and STDOUT (standard output) of the launched process. Thus, not only the DISTANT FORK and CAN MIGRATE (for load balancing) capabilities are required by the filtering script process, but also the USE INTRA CLUSTER KERSTREAMS capability.

Unfortunately, these capabilities cannot be given to the filter in an easy way. Firstly, the Postfix server cannot be run with the DISTANT FORK and CAN MIGRATE capabilities (see Approach 2 above). Secondly, the spawn process does not parse a given command parameter; therefore, the capabilities cannot be provided in the command parameter passed to a spawn process either. Due to the problems mentioned above, a two-step filtering script was prepared in order to run the filtering script in a cluster-wide manner.

[Figure 5.1: Two-step filtering script architecture.]

As shown in Figure 4.1, spawn processes are created within a Postfix server on demand. Suppose that the Postfix server is running on the master node. A spawn process will then also be created on the same node, because Postfix cannot be started with the DISTANT FORK or CAN MIGRATE capabilities (as mentioned above). Therefore, the

spawn process will not be able to start the filtering script on a distant node either, as shown in Figure 5.1. Therefore, the main task of script #1 is to prepare an environment for starting filtering script #2 on a distant node (shown as setting 3 capabilities on: DISTANT FORK, CAN MIGRATE, and USE INTRA CLUSTER KERSTREAMS; see also the discussion below). Script #2 is responsible for retrieving some capabilities inherited from its parent (see also the discussion below) and for launching both SpamAssassin and Anomy Sanitizer on the distant node.

The last decision to be made is which capabilities to set and which to unset in each of the two filtering scripts. It is obvious that both scripts have to be aware of USE INTRA CLUSTER KERSTREAMS, not only at the effective level, but also at the inheritable effective level, for any possible child processes of the used antispam and antivirus tools. Thus, filter script #1 sets this capability on and filter script #2 does not unset it. A different approach is needed for CAN MIGRATE and DISTANT FORK. In the first script these two capabilities have to be set at least at the effective level, in order to allow the second script to be started remotely. The inheritable effective level can also be set in order to allow child processes of filter script #2 to be started distantly and to allow them to be considered by the load balancing and general scheduling processes. Nevertheless, the effective level capabilities have to be removed from filter script #2 at the beginning of its run in order to avoid unnecessary migration of this process. With this approach, child processes of filter script #2 are not allowed to migrate or to distant fork.

Exceptionally for this approach, the testing cluster was limited to only two nodes, in order to make debugging easier. After manually testing the two-step filtering script on several messages, the Postfix configuration was changed in order to use it in the process of local mail delivery. Unfortunately, for some reason filter script #2 was not started distantly.

Approach 5: unavailability of capabilities

After the problem which arose at the end of the previous approach, an investigation was made into the reason for such behaviour. The answer is that even if a process is given a capability, for some reason this capability can be unavailable for use. Parts of the Linux kernel code are guarded by statements which decide whether a given capability cannot be used (the capability becomes unavailable) or can be used again (the capability becomes available). The availability of a capability is not a simple flag, but in fact a counter: a value equal to 0 means that the capability is available, greater than 0 that it is unavailable.

Assuming no errors were made in the test plans, two solutions emerged:

1. Knowing the guard conditions and their places in the kernel code, manage them differently, allowing the specified process not to lose the availability of its capabilities. For this purpose several files of the kernel source code were changed:

fs/fifo.c
fs/pipe.c
net/socket.c

This solution solved the problem of the availability of the needed capabilities, i.e. DISTANT FORK and CAN MIGRATE. The processes were able both to migrate and to distant fork. Nevertheless, during the attempt to distant fork and/or migrate, the flow of control encountered a part of the code where one of the necessary arguments (a pointer to a Kerrighed socket structure) was NULL, which caused the process to be killed.

2. A dirty solution consisting of an attempt at a forced distant fork: hardcoding into the kernel code the name of the process to be distant forked and distant forking it without checking the availability of the needed capabilities. For this solution the file kernel/fork.c was changed. Unfortunately, this solution also failed. During the distant fork attempt the parent node reported: Can't migrate a non-krg socket, even though the appropriate capabilities were set, and the destination node crashed.

Results achieved

Testing the Postfix server running on the Kerrighed cluster failed due to the above-mentioned errors and malfunctions. Unfortunately, no solutions for these errors were suggested (see also below).

Suggestions and solutions

Two suggestions are worth mentioning:

1. In approach 4 the master node should be excluded from the distant fork and migration algorithms by making appropriate changes to the following files:

aragorn/scheduler.c
aragorn/schedulers/cpu_scheduler2.c
arch/asm-i386/fork.h

2. The error reported by one of the nodes in approach 5 may suggest that Postfix does not create Kerrighed-aware sockets, even if the appropriate capabilities are set.


Chapter 6

Future work

6.1 Future development and suggestions

6.1.1 Capabilities checker

One very useful feature, which is currently not implemented in Kerrighed, would be the constant availability of information concerning the capabilities possessed by a process or thread. Let us call it a capabilities checker. After setting or unsetting one or more capabilities for a process, the user has no information about them. While capabilities should rather not be lost by the process or thread during execution, some of them can become unavailable. Unfortunately, the user has no information on which of the set capabilities are currently unavailable, when they become unavailable and/or available again, and for what reason. This information is not only unavailable during execution time, it is also not logged for later review. The capabilities checker could be provided as an extra capability, in order to log the mentioned information, or as a separate tool, in order to view the current state of the capabilities of a given process during execution.

6.1.2 Capabilities requirements

Another useful feature would be providing the user with information about the requirements of some capabilities. As an example let us examine the CAN MIGRATE capability. Even when setting this capability on, the process will not be able to migrate if it is not also given the DISTANT FORK capability. Unfortunately, Kerrighed does not inform the user about this in any way.

6.1.3 Capabilities preanalyser

Apart from the capabilities requirements mentioned in Section 6.1.2, a more sophisticated tool, a capabilities preanalyser, would be appreciated. The task of this tool would be to scan the source code (if available) and/or to monitor (possibly over several iterations) the execution of an application which is not cluster-aware, in order to suggest the most desirable set of capabilities needed by this application to take advantage of all of the cluster characteristics.

6.2 Corrections

Unfortunately, some parts of Kerrighed are still not implemented. One of these parts is the USE WORLD VISIBLE KERSTREAMS capability, which would be very useful for servers like Apache or Postfix. Also, the capability responsible for internal socket communication does not work completely as expected. These parts should be corrected.

Another correction, accompanied by a more detailed investigation, relates to the use of Kerrighed-aware sockets, which for some reason cannot be properly instantiated by some of the software (see the Postfix tests). Unfortunately, there is no easy way to check whether the created sockets are Kerrighed-aware or not.

Chapter 7

Conclusion

The simple testing applications available in the Kerrighed distribution scaled very well on the cluster. When it came to running widely available software, such as the Apache or Postfix servers, some mechanisms were partially unavailable or not working properly, due to missing implementation or other malfunctions. These mechanisms concerned mostly stream communication: sockets and IPC pipes. In some cases sockets were not properly created, which made it impossible to use them; in other cases sockets or pipes, even if created with the proper capabilities, prevented processes from migrating or distant forking.

Summarizing, Kerrighed is prepared for multithreaded/multiprocessed applications which have not been written for distributed environments, as long as they do not use certain mechanisms, such as world-visible stream communication. Moreover, running such an application sometimes requires knowledge about the internal architecture of the cluster, as well as a detailed understanding of the application's flow of control and of the resources and mechanisms it uses.


Bibliography

[1] Anomy Sanitizer,
[2] Anomy Sanitizer's filtering script, README.html
[3] Apache,
[4] Flood,
[5] Glenda,
[6] Http load,
[7] Httperf,
[8] Kerrighed,
[9] Kerrighed's User Manual, pdf
[10] Leaders/followers design pattern, pspdfs/lf.pdf
[11] Linda,
[12] MPI, Message Passing Interface,
[13] PERL, Practical Extraction and Report Language, com/
[14] Postfix,
[15] SpamAssassin,
[16] Torture,


A Middleware Strategy to Survive Compute Peak Loads in Cloud A Middleware Strategy to Survive Compute Peak Loads in Cloud Sasko Ristov Ss. Cyril and Methodius University Faculty of Information Sciences and Computer Engineering Skopje, Macedonia Email: sashko.ristov@finki.ukim.mk

More information

MOSIX: High performance Linux farm

MOSIX: High performance Linux farm MOSIX: High performance Linux farm Paolo Mastroserio [mastroserio@na.infn.it] Francesco Maria Taurino [taurino@na.infn.it] Gennaro Tortone [tortone@na.infn.it] Napoli Index overview on Linux farm farm

More information

Measured Performance of an Information System

Measured Performance of an Information System MEB 2009 7 th International Conference on Management, Enterprise and Benchmarking June 5 6, 2009 Budapest, Hungary Measured Performance of an Information System Szikora Péter Budapest Tech, Hungary szikora.peter@kgk.bmf.hu

More information

Windows Server 2008 R2 Hyper-V Live Migration

Windows Server 2008 R2 Hyper-V Live Migration Windows Server 2008 R2 Hyper-V Live Migration White Paper Published: August 09 This is a preliminary document and may be changed substantially prior to final commercial release of the software described

More information

Sage ERP Accpac Online

Sage ERP Accpac Online Sage ERP Accpac Online Mac Resource Guide Thank you for choosing Sage ERP Accpac Online. This Resource Guide will provide important information and instructions on how you can get started using your Mac

More information

PRODUCTIVITY ESTIMATION OF UNIX OPERATING SYSTEM

PRODUCTIVITY ESTIMATION OF UNIX OPERATING SYSTEM Computer Modelling & New Technologies, 2002, Volume 6, No.1, 62-68 Transport and Telecommunication Institute, Lomonosov Str.1, Riga, LV-1019, Latvia STATISTICS AND RELIABILITY PRODUCTIVITY ESTIMATION OF

More information

Introduction 1 Performance on Hosted Server 1. Benchmarks 2. System Requirements 7 Load Balancing 7

Introduction 1 Performance on Hosted Server 1. Benchmarks 2. System Requirements 7 Load Balancing 7 Introduction 1 Performance on Hosted Server 1 Figure 1: Real World Performance 1 Benchmarks 2 System configuration used for benchmarks 2 Figure 2a: New tickets per minute on E5440 processors 3 Figure 2b:

More information

Optimizing Shared Resource Contention in HPC Clusters

Optimizing Shared Resource Contention in HPC Clusters Optimizing Shared Resource Contention in HPC Clusters Sergey Blagodurov Simon Fraser University Alexandra Fedorova Simon Fraser University Abstract Contention for shared resources in HPC clusters occurs

More information

Web Server (Step 1) Processes request and sends query to SQL server via ADO/OLEDB. Web Server (Step 2) Creates HTML page dynamically from record set

Web Server (Step 1) Processes request and sends query to SQL server via ADO/OLEDB. Web Server (Step 2) Creates HTML page dynamically from record set Dawn CF Performance Considerations Dawn CF key processes Request (http) Web Server (Step 1) Processes request and sends query to SQL server via ADO/OLEDB. Query (SQL) SQL Server Queries Database & returns

More information

Review from last time. CS 537 Lecture 3 OS Structure. OS structure. What you should learn from this lecture

Review from last time. CS 537 Lecture 3 OS Structure. OS structure. What you should learn from this lecture Review from last time CS 537 Lecture 3 OS Structure What HW structures are used by the OS? What is a system call? Michael Swift Remzi Arpaci-Dussea, Michael Swift 1 Remzi Arpaci-Dussea, Michael Swift 2

More information

Chapter 2: OS Overview

Chapter 2: OS Overview Chapter 2: OS Overview CmSc 335 Operating Systems 1. Operating system objectives and functions Operating systems control and support the usage of computer systems. a. usage users of a computer system:

More information

HAProxy. Free, Fast High Availability and Load Balancing. Adam Thornton 10 September 2014

HAProxy. Free, Fast High Availability and Load Balancing. Adam Thornton 10 September 2014 HAProxy Free, Fast High Availability and Load Balancing Adam Thornton 10 September 2014 What? HAProxy is a proxy for Layer 4 (TCP) or Layer 7 (HTTP) traffic GPLv2 http://www.haproxy.org Disclaimer: I don't

More information

Performing Load Capacity Test for Web Applications

Performing Load Capacity Test for Web Applications International Journal of Innovation and Scientific Research ISSN 2351-8014 Vol. 17 No. 1 Aug. 2015, pp. 51-68 2015 Innovative Space of Scientific Research Journals http://www.ijisr.issr-journals.org/ Performing

More information

Benchmarking FreeBSD. Ivan Voras <ivoras@freebsd.org>

Benchmarking FreeBSD. Ivan Voras <ivoras@freebsd.org> Benchmarking FreeBSD Ivan Voras What and why? Everyone likes a nice benchmark graph :) And it's nice to keep track of these things The previous major run comparing FreeBSD to Linux

More information

One Server Per City: C Using TCP for Very Large SIP Servers. Kumiko Ono Henning Schulzrinne {kumiko, hgs}@cs.columbia.edu

One Server Per City: C Using TCP for Very Large SIP Servers. Kumiko Ono Henning Schulzrinne {kumiko, hgs}@cs.columbia.edu One Server Per City: C Using TCP for Very Large SIP Servers Kumiko Ono Henning Schulzrinne {kumiko, hgs}@cs.columbia.edu Goal Answer the following question: How does using TCP affect the scalability and

More information

Contributions to Gang Scheduling

Contributions to Gang Scheduling CHAPTER 7 Contributions to Gang Scheduling In this Chapter, we present two techniques to improve Gang Scheduling policies by adopting the ideas of this Thesis. The first one, Performance- Driven Gang Scheduling,

More information

co Characterizing and Tracing Packet Floods Using Cisco R

co Characterizing and Tracing Packet Floods Using Cisco R co Characterizing and Tracing Packet Floods Using Cisco R Table of Contents Characterizing and Tracing Packet Floods Using Cisco Routers...1 Introduction...1 Before You Begin...1 Conventions...1 Prerequisites...1

More information

Ekran System Help File

Ekran System Help File Ekran System Help File Table of Contents About... 9 What s New... 10 System Requirements... 11 Updating Ekran to version 4.1... 13 Program Structure... 14 Getting Started... 15 Deployment Process... 15

More information

DMS Performance Tuning Guide for SQL Server

DMS Performance Tuning Guide for SQL Server DMS Performance Tuning Guide for SQL Server Rev: February 13, 2014 Sitecore CMS 6.5 DMS Performance Tuning Guide for SQL Server A system administrator's guide to optimizing the performance of Sitecore

More information

DB2 Connect for NT and the Microsoft Windows NT Load Balancing Service

DB2 Connect for NT and the Microsoft Windows NT Load Balancing Service DB2 Connect for NT and the Microsoft Windows NT Load Balancing Service Achieving Scalability and High Availability Abstract DB2 Connect Enterprise Edition for Windows NT provides fast and robust connectivity

More information

SIDN Server Measurements

SIDN Server Measurements SIDN Server Measurements Yuri Schaeffer 1, NLnet Labs NLnet Labs document 2010-003 July 19, 2010 1 Introduction For future capacity planning SIDN would like to have an insight on the required resources

More information

Whitepaper: performance of SqlBulkCopy

Whitepaper: performance of SqlBulkCopy We SOLVE COMPLEX PROBLEMS of DATA MODELING and DEVELOP TOOLS and solutions to let business perform best through data analysis Whitepaper: performance of SqlBulkCopy This whitepaper provides an analysis

More information

Multi-core architectures. Jernej Barbic 15-213, Spring 2007 May 3, 2007

Multi-core architectures. Jernej Barbic 15-213, Spring 2007 May 3, 2007 Multi-core architectures Jernej Barbic 15-213, Spring 2007 May 3, 2007 1 Single-core computer 2 Single-core CPU chip the single core 3 Multi-core architectures This lecture is about a new trend in computer

More information

Linux Distributed Security Module 1

Linux Distributed Security Module 1 Linux Distributed Security Module 1 By Miroslaw Zakrzewski and Ibrahim Haddad This article describes the implementation of Mandatory Access Control through a Linux kernel module that is targeted for Linux

More information

Sage 300 ERP Online. Mac Resource Guide. (Formerly Sage ERP Accpac Online) Updated June 1, 2012. Page 1

Sage 300 ERP Online. Mac Resource Guide. (Formerly Sage ERP Accpac Online) Updated June 1, 2012. Page 1 Sage 300 ERP Online (Formerly Sage ERP Accpac Online) Mac Resource Guide Updated June 1, 2012 Page 1 Table of Contents 1.0 Introduction... 3 2.0 Getting Started with Sage 300 ERP Online using a Mac....

More information

How To Test For Performance And Scalability On A Server With A Multi-Core Computer (For A Large Server)

How To Test For Performance And Scalability On A Server With A Multi-Core Computer (For A Large Server) Scalability Results Select the right hardware configuration for your organization to optimize performance Table of Contents Introduction... 1 Scalability... 2 Definition... 2 CPU and Memory Usage... 2

More information

Tivoli IBM Tivoli Web Response Monitor and IBM Tivoli Web Segment Analyzer

Tivoli IBM Tivoli Web Response Monitor and IBM Tivoli Web Segment Analyzer Tivoli IBM Tivoli Web Response Monitor and IBM Tivoli Web Segment Analyzer Version 2.0.0 Notes for Fixpack 1.2.0-TIV-W3_Analyzer-IF0003 Tivoli IBM Tivoli Web Response Monitor and IBM Tivoli Web Segment

More information

Apache Tomcat. Load-balancing and Clustering. Mark Thomas, 20 November 2014. 2014 Pivotal Software, Inc. All rights reserved.

Apache Tomcat. Load-balancing and Clustering. Mark Thomas, 20 November 2014. 2014 Pivotal Software, Inc. All rights reserved. 2 Apache Tomcat Load-balancing and Clustering Mark Thomas, 20 November 2014 Introduction Apache Tomcat committer since December 2003 markt@apache.org Tomcat 8 release manager Member of the Servlet, WebSocket

More information

How To Test Your Web Site On Wapt On A Pc Or Mac Or Mac (Or Mac) On A Mac Or Ipad Or Ipa (Or Ipa) On Pc Or Ipam (Or Pc Or Pc) On An Ip

How To Test Your Web Site On Wapt On A Pc Or Mac Or Mac (Or Mac) On A Mac Or Ipad Or Ipa (Or Ipa) On Pc Or Ipam (Or Pc Or Pc) On An Ip Load testing with WAPT: Quick Start Guide This document describes step by step how to create a simple typical test for a web application, execute it and interpret the results. A brief insight is provided

More information

Special Edition for Loadbalancer.org GmbH

Special Edition for Loadbalancer.org GmbH IT-ADMINISTRATOR.COM 09/2013 The magazine for professional system and network administration Special Edition for Loadbalancer.org GmbH Under Test Loadbalancer.org Enterprise VA 7.5 Load Balancing Under

More information

Software Tracing of Embedded Linux Systems using LTTng and Tracealyzer. Dr. Johan Kraft, Percepio AB

Software Tracing of Embedded Linux Systems using LTTng and Tracealyzer. Dr. Johan Kraft, Percepio AB Software Tracing of Embedded Linux Systems using LTTng and Tracealyzer Dr. Johan Kraft, Percepio AB Debugging embedded software can be a challenging, time-consuming and unpredictable factor in development

More information

MAGENTO HOSTING Progressive Server Performance Improvements

MAGENTO HOSTING Progressive Server Performance Improvements MAGENTO HOSTING Progressive Server Performance Improvements Simple Helix, LLC 4092 Memorial Parkway Ste 202 Huntsville, AL 35802 sales@simplehelix.com 1.866.963.0424 www.simplehelix.com 2 Table of Contents

More information

Test Run Analysis Interpretation (AI) Made Easy with OpenLoad

Test Run Analysis Interpretation (AI) Made Easy with OpenLoad Test Run Analysis Interpretation (AI) Made Easy with OpenLoad OpenDemand Systems, Inc. Abstract / Executive Summary As Web applications and services become more complex, it becomes increasingly difficult

More information

Scalable Linux Clusters with LVS

Scalable Linux Clusters with LVS Scalable Linux Clusters with LVS Considerations and Implementation, Part II Eric Searcy Tag1 Consulting, Inc. emsearcy@tag1consulting.com May 2008 Abstract Whether you are perusing mailing lists or reading

More information

Learning GlassFish for Tomcat Users

Learning GlassFish for Tomcat Users Learning GlassFish for Tomcat Users White Paper February 2009 Abstract There is a direct connection between the Web container technology used by developers and the performance and agility of applications.

More information

Multi-Channel Clustered Web Application Servers

Multi-Channel Clustered Web Application Servers THE AMERICAN UNIVERSITY IN CAIRO SCHOOL OF SCIENCES AND ENGINEERING Multi-Channel Clustered Web Application Servers A Masters Thesis Department of Computer Science and Engineering Status Report Seminar

More information

JoramMQ, a distributed MQTT broker for the Internet of Things

JoramMQ, a distributed MQTT broker for the Internet of Things JoramMQ, a distributed broker for the Internet of Things White paper and performance evaluation v1.2 September 214 mqtt.jorammq.com www.scalagent.com 1 1 Overview Message Queue Telemetry Transport () is

More information

Front-End Performance Testing and Optimization

Front-End Performance Testing and Optimization Front-End Performance Testing and Optimization Abstract Today, web user turnaround starts from more than 3 seconds of response time. This demands performance optimization on all application levels. Client

More information

virtualization.info Review Center SWsoft Virtuozzo 3.5.1 (for Windows) // 02.26.06

virtualization.info Review Center SWsoft Virtuozzo 3.5.1 (for Windows) // 02.26.06 virtualization.info Review Center SWsoft Virtuozzo 3.5.1 (for Windows) // 02.26.06 SWsoft Virtuozzo 3.5.1 (for Windows) Review 2 Summary 0. Introduction 1. Installation 2. VPSs creation and modification

More information

Kerrighed / XtreemOS cluster flavour

Kerrighed / XtreemOS cluster flavour Kerrighed / XtreemOS cluster flavour Jean Parpaillon Reisensburg Castle Günzburg, Germany July 5-9, 2010 July 6th, 2010 Kerrighed - XtreemOS cluster flavour 1 Summary Kerlabs Context Kerrighed Project

More information

PARALLELS SERVER BARE METAL 5.0 README

PARALLELS SERVER BARE METAL 5.0 README PARALLELS SERVER BARE METAL 5.0 README 1999-2011 Parallels Holdings, Ltd. and its affiliates. All rights reserved. This document provides the first-priority information on the Parallels Server Bare Metal

More information

Project Report on Implementation and Testing of an HTTP/1.0 Webserver

Project Report on Implementation and Testing of an HTTP/1.0 Webserver Project Report on Implementation and Testing of an HTTP/1.0 Webserver Christian Fritsch, Krister Helbing, Fabian Rakebrandt, Tobias Staub Practical Course Telematics Teaching Assistant: Ingo Juchem Instructor:

More information

The Lagopus SDN Software Switch. 3.1 SDN and OpenFlow. 3. Cloud Computing Technology

The Lagopus SDN Software Switch. 3.1 SDN and OpenFlow. 3. Cloud Computing Technology 3. The Lagopus SDN Software Switch Here we explain the capabilities of the new Lagopus software switch in detail, starting with the basics of SDN and OpenFlow. 3.1 SDN and OpenFlow Those engaged in network-related

More information

Star System. 2004 Deitel & Associates, Inc. All rights reserved.

Star System. 2004 Deitel & Associates, Inc. All rights reserved. Star System Apple Macintosh 1984 First commercial OS GUI Chapter 1 Introduction to Operating Systems Outline 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 1.10 1.11 1.12 Introduction What Is an Operating System?

More information

Solution Guide Parallels Virtualization for Linux

Solution Guide Parallels Virtualization for Linux Solution Guide Parallels Virtualization for Linux Overview Created in 1991, Linux was designed to be UNIX-compatible software that was composed entirely of open source or free software components. Linux

More information

Web Server Architectures

Web Server Architectures Web Server Architectures CS 4244: Internet Programming Dr. Eli Tilevich Based on Flash: An Efficient and Portable Web Server, Vivek S. Pai, Peter Druschel, and Willy Zwaenepoel, 1999 Annual Usenix Technical

More information

theguard! ApplicationManager System Windows Data Collector

theguard! ApplicationManager System Windows Data Collector theguard! ApplicationManager System Windows Data Collector Status: 10/9/2008 Introduction... 3 The Performance Features of the ApplicationManager Data Collector for Microsoft Windows Server... 3 Overview

More information

Efficiency of Web Based SAX XML Distributed Processing

Efficiency of Web Based SAX XML Distributed Processing Efficiency of Web Based SAX XML Distributed Processing R. Eggen Computer and Information Sciences Department University of North Florida Jacksonville, FL, USA A. Basic Computer and Information Sciences

More information

Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V

Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V Installation Guide for Microsoft Hyper-V Egnyte Inc. 1890 N. Shoreline Blvd. Mountain View, CA 94043, USA Phone: 877-7EGNYTE (877-734-6983) www.egnyte.com 2013 by Egnyte Inc. All rights reserved. Revised

More information

Kaspersky Security 8.0 for Microsoft Exchange Servers Installation Guide

Kaspersky Security 8.0 for Microsoft Exchange Servers Installation Guide Kaspersky Security 8.0 for Microsoft Exchange Servers Installation Guide APPLICATION VERSION: 8.0 MAINTENANCE RELEASE 2 CRITICAL FIX 1 Dear User! Thank you for choosing our product. We hope that this document

More information

Summer Student Project Report

Summer Student Project Report Summer Student Project Report Dimitris Kalimeris National and Kapodistrian University of Athens June September 2014 Abstract This report will outline two projects that were done as part of a three months

More information

Check Point FireWall-1 HTTP Security Server performance tuning

Check Point FireWall-1 HTTP Security Server performance tuning PROFESSIONAL SECURITY SYSTEMS Check Point FireWall-1 HTTP Security Server performance tuning by Mariusz Stawowski CCSA/CCSE (4.1x, NG) Check Point FireWall-1 security system has been designed as a means

More information

Benchmarking Hadoop & HBase on Violin

Benchmarking Hadoop & HBase on Violin Technical White Paper Report Technical Report Benchmarking Hadoop & HBase on Violin Harnessing Big Data Analytics at the Speed of Memory Version 1.0 Abstract The purpose of benchmarking is to show advantages

More information

Linux Driver Devices. Why, When, Which, How?

Linux Driver Devices. Why, When, Which, How? Bertrand Mermet Sylvain Ract Linux Driver Devices. Why, When, Which, How? Since its creation in the early 1990 s Linux has been installed on millions of computers or embedded systems. These systems may

More information

Parallels Plesk Panel

Parallels Plesk Panel Parallels Plesk Panel Copyright Notice Parallels Holdings, Ltd. c/o Parallels International GMbH Vordergasse 49 CH8200 Schaffhausen Switzerland Phone: +41 526320 411 Fax: +41 52672 2010 Copyright 1999-2011

More information

RevoScaleR Speed and Scalability

RevoScaleR Speed and Scalability EXECUTIVE WHITE PAPER RevoScaleR Speed and Scalability By Lee Edlefsen Ph.D., Chief Scientist, Revolution Analytics Abstract RevoScaleR, the Big Data predictive analytics library included with Revolution

More information

Chapter 2: Remote Procedure Call (RPC)

Chapter 2: Remote Procedure Call (RPC) Chapter 2: Remote Procedure Call (RPC) Gustavo Alonso Computer Science Department Swiss Federal Institute of Technology (ETHZ) alonso@inf.ethz.ch http://www.iks.inf.ethz.ch/ Contents - Chapter 2 - RPC

More information

HP Device Manager 4.6

HP Device Manager 4.6 Technical white paper HP Device Manager 4.6 Installation and Update Guide Table of contents Overview... 3 HPDM Server preparation... 3 FTP server configuration... 3 Windows Firewall settings... 3 Firewall

More information

Igor Seletskiy. CEO, CloudLinux

Igor Seletskiy. CEO, CloudLinux Optimizing PHP settings for Shared Hosting March M h 21 21, 212 Igor Seletskiy CEO, CloudLinux Type Security Performance Stability bl mod_php Scary Excellent Bad mod_php + mod_ruid2 Questionable Excellent

More information

WebEx. Remote Support. User s Guide

WebEx. Remote Support. User s Guide WebEx Remote Support User s Guide Version 6.5 Copyright WebEx Communications, Inc. reserves the right to make changes in the information contained in this publication without prior notice. The reader should

More information

The Benefits of Verio Virtual Private Servers (VPS) Verio Virtual Private Server (VPS) CONTENTS

The Benefits of Verio Virtual Private Servers (VPS) Verio Virtual Private Server (VPS) CONTENTS Performance, Verio FreeBSD Virtual Control, Private Server and (VPS) Security: v3 CONTENTS Why outsource hosting?... 1 Some alternative approaches... 2 Linux VPS and FreeBSD VPS overview... 3 Verio VPS

More information

Chapter 2: Getting Started

Chapter 2: Getting Started Chapter 2: Getting Started Once Partek Flow is installed, Chapter 2 will take the user to the next stage and describes the user interface and, of note, defines a number of terms required to understand

More information

Guideline for stresstest Page 1 of 6. Stress test

Guideline for stresstest Page 1 of 6. Stress test Guideline for stresstest Page 1 of 6 Stress test Objective: Show unacceptable problems with high parallel load. Crash, wrong processing, slow processing. Test Procedure: Run test cases with maximum number

More information

Evaluating and Comparing the Impact of Software Faults on Web Servers

Evaluating and Comparing the Impact of Software Faults on Web Servers Evaluating and Comparing the Impact of Software Faults on Web Servers April 2010, João Durães, Henrique Madeira CISUC, Department of Informatics Engineering University of Coimbra {naaliel, jduraes, henrique}@dei.uc.pt

More information

Fast, flexible & efficient email delivery software

Fast, flexible & efficient email delivery software by Fast, flexible & efficient email delivery software Built on top of industry-standard AMQP message broker. Send millions of emails per hour. Why MailerQ? No Cloud Fast Flexible Many email solutions require

More information