COMPARISON OF VIRTUAL MACHINES USING DIFFERENT VIRTUALIZATION TECHNOLOGIES ON LINUX PLATFORM

MOHD KHAIRIL BIN MOHD APPANDI
2004219861

BACHELOR OF SCIENCE (HONS.) IN DATA COMMUNICATIONS AND NETWORKING

A PROJECT PAPER SUBMITTED TO
FACULTY OF INFORMATION TECHNOLOGY AND QUANTITATIVE SCIENCES
MARA UNIVERSITY OF TECHNOLOGY
SHAH ALAM, SELANGOR

MAY 2006
COMPARISON OF VIRTUAL MACHINES USING DIFFERENT VIRTUALIZATION TECHNOLOGIES ON LINUX PLATFORM

By:
MOHD KHAIRIL BIN MOHD APPANDI
2004219861

A Project Paper Submitted To
FACULTY OF INFORMATION TECHNOLOGY AND QUANTITATIVE SCIENCES
MARA UNIVERSITY OF TECHNOLOGY
In Partial Fulfilment of the Requirements for the
BACHELOR OF SCIENCE (HONS.) IN DATA COMMUNICATIONS AND NETWORKING

Approved by the Examining Committee:

PUAN ZARINA BINTI ZAINOL
Project Supervisor

ENCIK ADZHAR BIN ABDUL KADIR
Examiner

MARA UNIVERSITY OF TECHNOLOGY
SHAH ALAM, SELANGOR
MAY 2006
DECLARATION

I hereby declare that this research, together with all of its contents, is my own original work, except for information taken and extracted from other sources, which has been specifically acknowledged and quoted in the references. This project is submitted in partial fulfilment of the requirements of the Bachelor of Science (Hons.) in Data Communications and Networking programme.

MAY 2006

MOHD KHAIRIL BIN MOHD APPANDI
2004219861

ii
ACKNOWLEDGEMENTS

In the name of ALLAH, the Most Gracious and the Most Merciful.

First and foremost, I would like to express my gratitude to Allah S.W.T. for giving me the will, strength, guidance, and courage to complete this research paper.

Very special thanks to Puan Zarina Binti Zainol and Encik Adzhar Bin Abdul Kadir, who acted as my project supervisor and examiner respectively, for their ideas, patience, support, and guidance in coming up with this research project. Indeed, their advice and continuous encouragement made its completion possible.

My thanks also go to Prof. Madya Dr. Saadiah Binti Yahya, the lecturer for the subject ITT 560, and Puan Noorhayati Binti Mohamed Noor for their valuable explanations and guidelines in carrying out this research project. All of these contributions improved the quality of this research paper.

A million thanks go to my beloved family for their moral and material support. They never gave up teaching and motivating me to become who I am today. Thank you very much.

I would also like to extend my appreciation to all my fellow friends, especially the CS225 final year students, who were willing to share their opinions and experiences throughout the completion of this project. You guys are great and fantastic!

Last but not least, to all the people I have not mentioned here who contributed directly or indirectly to the completion of this research project: your kindness and cooperation are very much appreciated. Thank you very much, and may Allah bless all of you.

MOHD KHAIRIL BIN MOHD APPANDI
2004219861

iii
TABLE OF CONTENTS

PAGE
TITLE  i
DECLARATION  ii
ACKNOWLEDGEMENTS  iii
TABLE OF CONTENTS  iv
LIST OF TABLES  vii
LIST OF FIGURES  viii
LIST OF ABBREVIATIONS  ix
ABSTRACT  x

CHAPTER 1: INTRODUCTION
1.1 INTRODUCTION  1
1.2 PROBLEM STATEMENT  2
1.3 OBJECTIVES OF THE PROJECT  2
1.4 SCOPE OF THE PROJECT  3
1.5 SIGNIFICANCE OF THE PROJECT  3

CHAPTER 2: LITERATURE REVIEW
2.1 INTRODUCTION  4
2.2 VIRTUALIZATION  4
2.2.1 Introduction to Virtualization  4
2.2.2 Virtualization History  5
2.2.3 Virtualization Components  6
2.2.4 Virtualization Levels  7
2.2.5 Virtualization Software  7
2.2.6 Benefits of Virtualization  9
2.3 VIRTUAL MACHINES  11
2.3.1 Introduction To Virtual Machines  11
2.3.2 Virtual Machines History  13
2.3.3 Virtual Machines Advantages  13
2.3.4 System Requirements  15

iv

2.4 VIRTUALIZATION TECHNOLOGY  17
2.4.1 Introduction to Virtualization Technology  17
2.4.2 Virtualization Technology Applications  18
2.4.2.1 Introduction  18
2.4.2.2 VMware Virtualization Technology Applications  18
2.4.2.3 Parallels Virtualization Technology Applications  19
2.4.2.4 Intel Virtualization Technology Applications  20
2.4.2.5 HP Virtualization Technology Applications  22
2.5 VIRTUALIZATION PLATFORM  23
2.5.1 Linux Platform  25
2.6 SIMILAR PROJECTS  27
2.6.1 Survey of Virtual Machines Research  27
2.6.2 Scalability Comparison of 4 Host Virtualization Tools  28
2.6.3 Intel Virtualization Technology: A Primer  29
2.7 SUMMARY  30

CHAPTER 3: METHODOLOGY
3.1 INTRODUCTION  31
3.2 INFORMATION GATHERING  33
3.2.1 Data Collection  33
3.2.1.1 Non-Electronic Research  33
3.2.1.2 Electronic Research  34
3.3 ANALYSIS  34
3.3.1 Hardware Requirements  35
3.3.2 Software Requirements  35
3.4 DEVELOPMENT  42
3.4.1 Software Installations and Configurations  44
3.5 TESTING AND IMPLEMENTATION  47
3.5.1 Running The Virtual Machines  47
3.5.1.1 Running VMware Workstation 5.5  48
3.5.1.2 Running Parallels Workstation 2.1  49
3.5.1.3 Running Both The Virtual Machines  50

v

3.5.2 Monitoring The Virtual Machine Performance  51
3.5.2.1 Using Manual Timing  52
3.5.2.2 Using MB-Timer 1.0  52
3.5.2.3 Using Windows Task Manager  52
3.5.2.4 Using Windows System Monitor  53
3.6 DOCUMENTATION  54
3.7 CONCLUSION  54

CHAPTER 4: RESULTS AND FINDINGS
4.1 INTRODUCTION  55
4.2 VIRTUAL MACHINE PERFORMANCE RESULTS  56
4.2.1 Guest OS Installation Time  56
4.2.2 Boot Time  57
4.2.3 Memory Usage  59
4.2.4 % Processor Time  61
4.2.5 Handle Count  61
4.2.6 Virtual Bytes  61
4.3 COMPARISON OF DIFFERENT VIRTUALIZATION TECHNOLOGIES  65
4.4 FINDINGS  66
4.5 CONCLUSION  68

CHAPTER 5: CONCLUSIONS AND RECOMMENDATIONS
5.1 INTRODUCTION  69
5.2 CONCLUSIONS  69
5.3 RECOMMENDATIONS  70

REFERENCES  71
APPENDICES  72
Appendix A: Windows XP Professional (Host OS) Interface
Appendix B: Fedora Core 4 (Guest OS) Interface
Appendix C: Virtualization Terms
Appendix D: Gantt Chart

vi
LIST OF TABLES

TABLE  TITLE  PAGE
2.1  Project Similarities and Differences  30
3.1  Hardware and Software Requirements  34
4.1  Guest OS Installation Time Results  56
4.2  Boot Time Results  58
4.3  VM Memory Usage Results  60
4.4  VM % Processor Time Results  63
4.5  VM Handle Count Results  64
4.6  VM Virtual Byte Results  64
4.7  Comparison between VMware Workstation 5.5 Technology and Parallels Workstation 2.1 Technology  65
4.8  Summary of Virtual Machine Metrics Performance  66

vii
LIST OF FIGURES

FIGURE  TITLE  PAGE
2.1  Server Virtualization Usage  10
2.2  Virtual Machine (VM) Concept  12
2.3  The Virtual Machine Illustration of This Research Paper  16
3.1  Research Methodology Diagram  32
3.2  Development Process Flowchart  43
3.3  VMware Workstation 5.5 Is Running FC4 (Virtual Machine 1)  48
3.4  Parallels Workstation 2.1 Is Running FC4 (Virtual Machine 2)  49
3.5  Both The Virtual Machines Are Running Multiple Operating Systems (OSes) Simultaneously In A Single Hardware Platform (PC)  50
4.1  MB-Timer 1.0 Is Showing The Boot Time For FC4 (VMware)  57
4.2  MB-Timer 1.0 Is Showing The Boot Time For FC4 (Parallels)  58
4.3  VMware Workstation 5.5 Memory Usage  59
4.4  Parallels Workstation 2.1 Memory Usage  60
4.5  VMware Workstation 5.5 Metrics Performance Results  62
4.6  Parallels Workstation 2.1 Metrics Performance Results  63

viii
LIST OF ABBREVIATIONS

CPU  Central Processing Unit
FAQs  Frequently Asked Questions
FC4  Fedora Core 4
IT  Information Technology
MMC  Microsoft Management Console
NIC  Network Interface Card
OS  Operating System
PC  Personal Computer
QoS  Quality of Service
RAM  Random Access Memory
VM  Virtual Machine
VMM  Virtual Machine Monitor
VS  Virtualization Software
WinXP Pro  Windows XP Professional

ix
ABSTRACT

In computing, virtualization is the process of presenting computing resources in ways that allow users and applications to easily get value out of them, rather than in ways dictated by their implementation, geographic location, or physical packaging. In other words, it provides a logical rather than physical view of data, computing power, storage capacity, and other resources. This research project compares two virtual machines using two different virtualization technologies (VMware and Parallels) that run the same Linux platform (Fedora Core 4) on a single personal computer (PC), in order to measure their respective performance. It uses the host-guest virtualization approach, in which Windows XP Professional acts as the host operating system while each virtual machine runs Fedora Core 4 as a guest operating system. The VMware and Parallels virtualization technologies act as virtual machine monitors (VMMs), or hypervisors, to create this environment on a single hardware platform (PC). In terms of performance, the virtual machines are able to run multiple operating systems on a single hardware platform, reducing the cost of real hardware and software. In addition, they use a variety of virtualization techniques such as simulation, emulation, and hardware or software partitioning of resources. Five methodologies were used to accomplish this research paper, starting with information gathering, followed by analysis, development, testing and implementation, and ending with documentation.

x
CHAPTER 1
INTRODUCTION

1.1 INTRODUCTION

Over the years, computers have become sufficiently powerful to use virtualization to create the illusion of many smaller virtual machines, each running a separate operating system instance. Basically, a virtual machine, or VM, is a layer of software that runs on top of a virtualization management layer and encapsulates the entire independent software stack of an operating system and various applications. Since multiple VMs can be loaded on a computer, multiple operating systems and applications can run simultaneously on a single unit.

Meanwhile, virtualization can be defined as the set of technologies that allow software applications to view computing resources, typically server hardware or storage systems, as either many smaller units (partitioning) or multiple units grouped together to appear as one larger system (clustering). Virtualization essentially allows software to be separated from the physical hardware. The end result is that Information Technology (IT) departments are able to optimize their operations by flexibly adding, subtracting, mixing, and matching hardware and software resources to enhance efficiency and reliability.

Although this technology has been around for decades in mainframe computers and various flavors of UNIX, only in the past few years has it become widely available for use on the increasingly popular Wintel and Lintel platforms, as x86 chips had not previously been conducive to virtualization. There are several kinds of virtual machines (VMs) which provide similar features but differ in the degree of abstraction and the methods used for virtualization.

1
1.2 PROBLEM STATEMENT

In the past, it was common for developers to need multiple computers, each running a different operating system, to test their work. In addition, in the traditional scenario, systems and capacity are fixed, and resources are often over-provisioned to meet peak demands. With virtual machines, developers can consolidate these machines onto one workstation. Information Technology (IT) infrastructures today require simplicity, agility, and value to enhance the competitive advantage of the organizations they serve. By reducing complexity, increasing resource utilization, and lowering costs, businesses gain the flexibility to devote more of their attention to new opportunities and less to maintenance and management. That is the principle behind virtualization technologies.

Overall, there are four main problems to be addressed in this research project: high hardware cost, inefficiency, expensive and time-consuming maintenance, and massive operating costs. Hopefully, these virtualization technologies can solve the problems more effectively and efficiently.

1.3 OBJECTIVES OF THE PROJECT

Project objectives are the most crucial part of the research paper. Thus, it is very important to state the objectives clearly. From the objectives, the targets to be accomplished in the research project can be determined. In addition, they help ensure that the project stays on the right track. Basically, there are two (2) main objectives of this research, listed as follows:

1) To compare two virtual machines using two different virtualization technologies in order to show their respective performance.
2) To run multiple operating systems on a single hardware platform (PC) in order to reduce the cost of real hardware and software.

2
1.4 SCOPE OF THE PROJECT

Virtual machines using virtualization technologies are a very wide topic to cover. Due to the constraints in running this project, it is also important to clarify the project's scope in order to make the project achievable. The scopes of the project are as follows:

1) It only involves two different virtualization technologies to create two virtual machines.
2) It uses the host-guest virtualization approach.
3) It applies the Windows platform as the host operating system and the Linux platform as the guest operating system.

1.5 SIGNIFICANCE OF THE PROJECT

The main significance of this research is as follows:

1) Hardware expenses can be reduced since there is no need to dedicate an entire machine to a single operating system.
2) The amount of hardware to be managed can be reduced, and the replication problem can be solved.
3) Development time can be reduced since software development can almost completely overlap with hardware development.
4) The level of security can be enhanced: since each virtual machine is totally independent, an infected or attacked virtual machine can easily be shut down, minimizing damage to other critical systems.

3
CHAPTER 2
LITERATURE REVIEW

2.1 INTRODUCTION

The aim of this chapter is to provide the theoretical background related to this project. It focuses on the definitions of relevant concepts and the technology being used. Having trusted and useful information will lead to a better understanding of this project.

2.2 VIRTUALIZATION

2.2.1 Introduction to Virtualization

Virtualization is an abstraction layer that decouples the physical hardware from the operating system to deliver greater IT resource utilization and flexibility. In storage, for example, it is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device managed from a central console. Virtualization involves presenting computing resources in ways that users and applications can easily get value out of them, rather than in a way dictated by their implementation, geographic location, or physical packaging. In other words, it provides a logical rather than physical view of data, computing power, storage capacity, and other resources. (Andrew Binstock, 2004)

Virtualization allows multiple virtual machines, with heterogeneous operating systems, to run in isolation, side by side on the same physical machine. Each virtual machine has its own set of virtual hardware (e.g., RAM, CPU, NIC) upon which an operating system and applications are loaded. The operating system sees a consistent, normalized set of hardware regardless of the actual physical hardware components.

4
2.2.2 Virtualization History

Virtualization was first introduced in the 1960s to allow partitioning of large mainframe hardware, a scarce and expensive resource. Over time, minicomputers and PCs provided a more efficient, affordable way to distribute processing power, so by the 1980s virtualization was no longer widely employed. In the 1990s, researchers began to see how virtualization could solve some of the problems associated with the proliferation of less expensive hardware, including underutilization, escalating management costs, and vulnerability.

In the mid 1960s, the IBM Watson Research Center was home to the M44/44X Project, whose goal was to evaluate the then emerging time sharing system concepts. The architecture was based on virtual machines: the main machine was an IBM 7044 (M44) and each virtual machine was an experimental image of the main machine (44X). The address space of a 44X was resident in the M44's memory hierarchy, implemented via virtual memory and multiprogramming. (Amit Singh, 2005)

IBM had provided an IBM 704 computer, a series of upgrades (such as to the 709, 7090, and 7094), and access to some of its system engineers to MIT in the 1950s. It was on IBM machines that the Compatible Time Sharing System (CTSS) was developed at MIT. The supervisor program of CTSS handled console I/O, scheduling of foreground and background (offline-initiated) jobs, temporary storage and recovery of programs during scheduled swapping, monitoring of disk I/O, and so on. The supervisor had direct control of all trap interrupts.

Around the same time, IBM was building the 360 family of computers. MIT's Project MAC, founded in the fall of 1963, was a large and well-funded organization that later morphed into the MIT Laboratory for Computer Science. Project MAC's goals included the design and implementation of a better time sharing system based on ideas from CTSS.
This research would lead to Multics, although IBM would lose the bid and General Electric's GE 645 would be used instead. 5
Regardless of this loss, IBM has been perhaps the most important force in this area. A number of IBM-based virtual machine systems were developed: the CP-40 (developed for a modified version of the IBM 360/40), the CP-67 (developed for the IBM 360/67), the famous VM/370, and many more. Typically, IBM's virtual machines were identical copies of the underlying hardware. A component called the virtual machine monitor (VMM) ran directly on real hardware. Multiple virtual machines could then be created via the VMM, and each instance could run its own operating system. IBM's VM offerings of today are very respected and robust computing platforms.

2.2.3 Virtualization Components

According to Amit Singh, specific virtualization components include (but are not limited to):

- Virtualized system calls
- Virtualized uid 0 (each instance has its own root user)
- Fair share network scheduler
- Per-virtual-OS resource limits on memory, CPU, and link
- Virtual sockets and TLI (including port space)
- Virtual NFS
- Virtual IP address space
- Virtual disk driver and enhanced VFS (each instance sees its own physical disk that can be resized dynamically, on which it can create partitions)
- Virtual System V IPC layer (each instance gets its own IPC namespace)
- Virtual /dev/kmem (each instance can access /dev/kmem appropriately without compromising other instances or the system)
- Virtual /proc file system (each instance gets its own /proc with only its own processes showing up)
- Virtual syslog facility
- Virtual device file system
- Per-instance init
- Overall system management layer

6
2.2.4 Virtualization Levels

Hewlett-Packard (HP) offers a broad range of virtualization solutions spanning the Microsoft Windows, UNIX, and Linux operating environments. All levels (element virtualization, integrated virtualization, and the complete IT utility) are designed to produce a more optimized infrastructure.

Element virtualization is a logical first step on the virtualization journey, where the utilization of individual servers, storage, networking, software, printers, and clients is dramatically increased to meet demand within a single application environment or business process. Integrated virtualization is the optimization of multiple infrastructure elements within a single application environment or business process to meet service-level agreements automatically. An example is the HP Virtual Server Environment (VSE), in which virtual servers automatically grow and shrink based on the service-level objectives set for each application they host. Virtualization's ultimate desired end state is the complete IT utility, in which all heterogeneous resources are pooled and shared across applications and business processes so that supply meets demand in real time. A complete IT utility leverages virtualization, management, and automation, and includes sourcing and financing options.

2.2.5 Virtualization Software

Virtualization software (VS) is a way of running multiple operating systems on the same computer, all at the same time. It is like having many computers inside one computer. (Joseph D. Foran, 2005)

Traditional methods of running multiple operating systems (by partitioning the hard drive and creating a dual boot) have two main limitations: only one OS can run at a time, and the physical hardware on the computer limits the choices (for instance, users cannot run Mac OS on a PC in most cases).

7
Virtualization changes this because the software runs as an application on the computer and emulates the hardware, so hardware compatibility is not an issue. Simply start the virtualization program, and it pretends to be a computer. Each operating system installed in this way acts as a new computer. Virtualization software (VS) is a software application, much like Word, Excel, or Firefox. To get started, power up the computer, insert an operating system's install disk into the DVD or CD drive, and install the guest operating system(s). When operating systems are installed in an emulated hardware environment, they are called guest operating systems, or virtual machines (VMs), while the main operating system is called the host OS.

Using VS greatly cuts the setup and breakdown time for testing any kind of software development; it is like having a lab of ten systems, all on one box. For example, some IT departments will install a standardized version of Windows that can also set up all of a user's programs automatically. Naturally, when necessary changes or upgrades alter the systems, testing is required. To do this without VS, a lab would need to be set up with computers, network gear, and other expensive hardware. With VS, the new build process can be tested quickly, reliably, and consistently.

Most commercial VS packages are easy to set up, but take some tweaking to perform at top speed. Most open source packages, however, still require heavy tweaking; Xen, for example, requires a whole different setup to be completed before installing a guest OS. There are several vendors offering varying types of VS software; some packages cost thousands of dollars while others are open source programs that cost nothing. The application that fits depends on how many computers are available, what sort of work is done, the level of technical expertise, and what kind of technology support is needed.

8
2.2.6 Benefits of Virtualization

Virtualization is gaining widespread adoption due to its indisputable customer benefits. Basically, there are three main benefits of virtualization:

1) Partitioning
Partitioning is the splitting of a single, usually large, resource (such as disk space or network bandwidth) into a number of smaller, more easily utilized resources of the same type. This is sometimes also called zoning. Multiple applications and operating systems can be supported within a single physical system. Servers can be consolidated into virtual machines on either a scale-up or scale-out architecture, and computing resources are treated as a uniform pool to be allocated to virtual machines in a controlled manner.

2) Isolation
Virtual machines are completely isolated from the host machine and other virtual machines. If a virtual machine crashes, all others are unaffected. Data does not leak across virtual machines, and applications can only communicate over configured network connections.

3) Encapsulation
The complete virtual machine environment is saved as a single file, which is easy to back up, move, and copy. Standardized virtualized hardware is presented to the application, guaranteeing compatibility.

9
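The encapsulation benefit is easy to picture: a virtual machine's entire definition can live in one small text file alongside its virtual disk image. The fragment below is a simplified, hypothetical example of such a file; the key names are invented for illustration and do not follow the exact syntax of VMware, Parallels, or any other product.

```ini
# Hypothetical VM definition file -- one file describes the whole machine
name       = "fedora-core-4-test"
memory_mb  = 256                    ; guest RAM allocated from the host
cpus       = 1
disk_image = "fc4.img"              ; the virtual hard disk, also a single host file
cdrom      = "FC4-i386-disc1.iso"   ; install media attached as a virtual CD-ROM
network    = "nat"                  ; guest shares the host's network connection
```

Because the machine is just these files, backing it up or moving it to another host is a matter of copying them, which is exactly the property described above.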
Figure 2.1: Server Virtualization Usage 10
2.3 VIRTUAL MACHINES

2.3.1 Introduction to Virtual Machines

"Virtual machine" is a term used by Sun Microsystems, developers of the Java programming language and runtime environment, to describe software that acts as an interface between compiled Java binary code and the microprocessor (or "hardware platform") that actually performs the program's instructions. Once a Java virtual machine has been provided for a platform, any Java program (which, after compilation, is called bytecode) can run on that platform. Java was designed to allow application programs to be built that could run on any platform without having to be rewritten or recompiled by the programmer for each separate platform. Java's virtual machine makes this possible.

The Java virtual machine specification defines an abstract rather than a real machine (or processor) and specifies an instruction set, a set of registers, a stack, a "garbage heap", and a method area. The real implementation of this abstract or logically defined processor can be in other code that is recognized by the real processor, or it can be built into the microchip processor itself. The output of compiling a Java source program (a set of Java language statements) is called bytecode. A Java virtual machine can either interpret the bytecode one instruction at a time (mapping it to a real microprocessor instruction) or compile the bytecode further for the real microprocessor using what is called a just-in-time compiler.

At IBM, a virtual machine is any multi-user, shared-resource operating system that gives each user the appearance of having sole control of all the resources of the system. It is also used to mean an operating system that is in turn managed by an underlying control program. Thus, IBM's VM/ESA can control multiple virtual machines on an IBM S/390 system. Elsewhere, "virtual machine" has been used to mean either an operating system or any program that runs a computer.
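To make the earlier description of interpreting bytecode "one instruction at a time" concrete, here is a minimal sketch of a stack-based interpreter in Java. The four opcodes (PUSH, ADD, MUL, HALT) are invented for illustration and are not the real JVM instruction set; the sketch only demonstrates the fetch-decode-execute dispatch loop that an abstract machine interpreter uses.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A toy stack-based "virtual machine": it fetches one instruction at a
// time from an int array and executes it, the same dispatch-loop
// principle a real bytecode interpreter follows.
public class ToyVM {
    // Invented opcodes for illustration only.
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    static int execute(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0; // program counter into the "bytecode"
        while (true) {
            int op = code[pc++];
            switch (op) {
                case PUSH: stack.push(code[pc++]); break;           // operand follows the opcode
                case ADD:  stack.push(stack.pop() + stack.pop()); break;
                case MUL:  stack.push(stack.pop() * stack.pop()); break;
                case HALT: return stack.pop();                      // result is on top of the stack
                default:   throw new IllegalStateException("bad opcode " + op);
            }
        }
    }

    public static void main(String[] args) {
        // "Bytecode" for the expression (2 + 3) * 4
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(execute(program)); // prints 20
    }
}
```

A just-in-time compiler, by contrast, would translate the same instruction array into native machine code once and then run that directly, instead of re-dispatching on every opcode.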
A running program is often referred to as a virtual machine: a machine that does not exist as a matter of actual physical reality. The virtual machine idea is

11

itself one of the most elegant in the history of technology and is a crucial step in the evolution of ideas about software. To come up with it, scientists and technologists had to recognize that a computer running a program is not merely a washer doing laundry. A washer is a washer whatever clothes are put inside it, but when a new program is loaded, a computer becomes a new machine.

A virtual machine is an environment which appears to the guest operating system to be hardware, but is simulated in a contained software environment by the host system. The simulation must be robust enough for hardware drivers in the guest system to work. (Wikipedia, 2005)

Figure 2.2: Virtual Machine (VM) Concept

12
2.3.2 Virtual Machines History

In the late 1960s, VM was the first virtual machine environment, developed for the IBM System/360 mainframe. Initially implemented entirely in software, hardware circuits were added later to provide faster and more robust partitioning between system images. Starting with the Intel 386 in 1985, x86 CPUs have included hardware support for running multiple 16-bit DOS applications. However, there was no hardware-based virtual machine mode for running multiple 32-bit operating systems until Intel announced VT (Vanderpool) in 2004 and AMD announced Pacifica in 2005.

2.3.3 Virtual Machines Advantages

The following are some representative reasons for and advantages of virtual machines:

- Virtual machines can be used to consolidate the workloads of several under-utilized servers onto fewer machines, perhaps a single machine (server consolidation). Related benefits (perceived or real, but often cited by vendors) are savings on hardware, environmental costs, management, and administration of the server infrastructure.
- The need to run legacy applications is served well by virtual machines. A legacy application might simply not run on newer hardware and/or operating systems. Even if it does, it may under-utilize the server, so as above, it makes sense to consolidate several applications. This may be difficult without virtualization, as such applications are usually not written to co-exist within a single execution environment.
- Virtual machines can be used to provide secure, isolated sandboxes for running untrusted applications. Users could even create such an execution environment dynamically, on the fly, as they download something from the Internet and run it. Virtualization is an important concept in building secure computing platforms.
- Virtual machines can be used to create operating systems, or execution environments with resource limits, and given the right schedulers, resource

13