CloudVMI: Virtual Machine Introspection as a Cloud Service
Hyun-wook Baek, Abhinav Srivastava and Jacobus Van der Merwe
University of Utah: [email protected], [email protected]
AT&T Labs - Research: [email protected]

Abstract

Virtual machine introspection (VMI) is a mechanism that allows indirect inspection and manipulation of the state of virtual machines. The indirection of this approach offers attractive isolation properties that have resulted in a variety of VMI-based applications dealing with security, performance, and debugging in virtual machine environments. Because it requires privileged access to the virtual machine monitor, VMI functionality is unfortunately not available to cloud users on public cloud platforms. In this paper, we present our work on the CloudVMI architecture to address this concern. CloudVMI virtualizes the VMI interface and makes it available as-a-service in a cloud environment. Because it allows introspection of users' VMs running on arbitrary physical machines in a cloud environment, our VMI-as-a-service abstraction allows a new class of cloud-centric applications to be developed. We present the design and implementation of CloudVMI in the Xen hypervisor environment. We evaluate our implementation using a number of applications, including a simple application that illustrates the cross-physical machine capabilities of CloudVMI.

I. INTRODUCTION

The flexibility and ease of management that can be realized by virtual machine (VM) technologies have made them a mainstay in modern data centers and enabled the realization of cloud computing platforms. The clean abstraction offered by virtualized architectures has also given rise to the concept of virtual machine introspection (VMI) [1]. VMI allows visibility into and control of the state of a running virtual machine by software running outside of the virtual machine. The indirection offered by this approach is quite attractive, especially for security related applications, and as a result a variety of VMI tools have been developed [1], [2], [3].
Because they are allowed access to the system memory, VMI tools typically run in the privileged domain (e.g., dom0 in the Xen architecture). LibVMI is an open source implementation of VMI supporting commodity hypervisors such as Xen and KVM [4]. LibVMI provides the functionality of mapping raw memory pages of VMs inside the privileged VM and relies on monitoring software to interpret the contents of these mapped pages. Due to the semantic gap between the information present in the memory pages (raw bits) and the higher-level information used by applications and the operating system (e.g., data structures, files), monitoring software builds the higher-level abstractions by using knowledge of the internals of the application or operating system running inside the VM [5]. For this reason, VMI-based monitoring software becomes closely tied to the specific operating system, application type, version, or distribution. Developing and executing VMI-based monitoring applications in public clouds is more challenging than in self-virtualized data centers. In a self-virtualized data center, customers have complete control over the privileged VM. Unfortunately, users in a public cloud computing platform do not have access to the privileged domain, which is under the control of the cloud provider. As a result, cloud users cannot deploy or configure custom VMI tools without the help of cloud providers. Given that cloud platforms are the predominant manner in which customers use VMs, this is a serious impediment. From a cloud provider's perspective, offering VMI tools to customers is painstaking. Given the many operating systems, versions, and distributions that can be executed inside VMs, supporting all of them is neither feasible nor a scalable service for providers. In this paper, we present our work on CloudVMI to address these concerns. CloudVMI allows VMI to be offered as a cloud service by a cloud provider to cloud customers. Specifically, CloudVMI splits the VMI tool development process into two components.
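A toy sketch of this split may help fix the idea. All names, the page size, and the fake "task record" layout below are our own illustration, not CloudVMI's or LibVMI's actual interface: the provider side only hands out raw page bytes, while the customer side supplies the guest-OS knowledge needed to bridge the semantic gap.

```python
import struct

def provider_map_page(guest_memory: bytes, page_num: int, page_size: int = 4096) -> bytes:
    """Provider-side primitive: return the raw bytes of one guest page.
    In CloudVMI this role is played by the privileged memory mapping
    the cloud operator performs via LibVMI."""
    start = page_num * page_size
    return guest_memory[start:start + page_size]

def customer_parse_task(raw_page: bytes, offset: int) -> tuple:
    """Customer-side logic: interpret raw bits as a (hypothetical) guest
    kernel structure, here a 4-byte little-endian PID followed by a
    NUL-terminated name of at most 16 bytes."""
    pid = struct.unpack_from("<I", raw_page, offset)[0]
    name = raw_page[offset + 4:offset + 20].split(b"\x00")[0].decode()
    return pid, name

# A toy "guest memory" image containing one such record in page 0.
memory = bytearray(8192)
struct.pack_into("<I", memory, 64, 1234)   # PID field
memory[68:72] = b"sshd"                     # name field
page = provider_map_page(bytes(memory), 0)
print(customer_parse_task(page, 64))        # (1234, 'sshd')
```

The point of the split is visible in the signatures: `provider_map_page` needs privilege but no OS knowledge, while `customer_parse_task` needs OS knowledge but no privilege.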
The first component, which implements the LibVMI functionality to map memory pages, is implemented by the cloud provider. The second component, which builds the higher-level semantics from the lower-level information present on mapped pages, is implemented by the customer. The focus of the work presented in this paper is the first part: how cloud providers can allow users to perform privileged memory mapping functionality. In essence, CloudVMI allows cloud users to request introspection privileges for their VMs running on the cloud platform. This design solves the aforementioned problems: (i) virtualizing the LibVMI interface and allowing cloud customers to safely invoke it to request introspection obviates the need for access to the privileged VM; (ii) requiring customers to bring their own policies and implementations of monitoring software relieves cloud providers from supporting many application and OS distributions and versions. We present the design and implementation of the CloudVMI architecture. CloudVMI virtualizes the LibVMI interface and presents the resulting vLibVMI interface to customers' VMI-based tools in a cloud environment. Invocations of vLibVMI are multiplexed and policed according to customer information to ensure that introspection actions are constrained to the cloud resources allocated to the respective cloud user. The vLibVMI interface can be remotely invoked. This functionality enables novel cross-physical-machine introspection capabilities, allowing cloud-centric (as opposed to device-centric)
applications. As expected, our evaluation shows a performance impact associated with virtualization and, specifically, remote invocations. Nonetheless, our results show adequate performance to facilitate a large class of VMI-based applications. Our implementation utilizes Xen hypervisor-based clouds; however, our approach is general and equally applicable to other VMMs such as KVM. To summarize, we make the following contributions:
- Realizing the drawbacks of existing VMI design, we design the CloudVMI architecture to allow VMI functionality to be offered as-a-service by a cloud provider.
- We split the VMI tool design process into two components, allowing cloud providers to offer only privileged memory page mapping functionality and requiring customers to bring their own customized monitoring software.
- We present a prototype implementation of CloudVMI using the Xen hypervisor and LibVMI. Our evaluation demonstrates the efficacy of the system through a cloud-centric application which utilizes the cross-physical machine introspection capabilities of CloudVMI.

II. CLOUDVMI

The primary goal of our work is to enable cloud operators to provide VMI capabilities as a cloud service, so that their users can make use of VMI-based tools to monitor their own VMs without compromising the security of other cloud users. With CloudVMI we achieve this goal by virtualizing the low-level LibVMI interface and exposing the virtualized interface as-a-service in a cloud environment. Below we first describe the use of CloudVMI in such a cloud environment, before describing the virtualized CloudVMI architecture. Figure 1 depicts a somewhat simplified cloud architecture to illustrate how we achieve the above mentioned goals with CloudVMI. Figure 1 (a) shows two physical machines with CloudVMI capabilities, a simple cloud control architecture and two cloud users invoking the cloud control application programming interface (API) to request the instantiation of three and one VMs, respectively.
(To simplify exposition we assume a simple CreateVM() API call and further assume that VMs are named (numbered) from a system-wide name space.) Figure 1 (a) also shows that the cloud control architecture maintains database entries about users, their virtual resources (VMs) and the physical machines those resources are realized on. Given this setup, Figure 1 (b) shows the interactions involved when users request monitoring capabilities for their previously instantiated VMs (using the CreateMonitoringVM() function). Specifically, User1 requests monitoring of VM1 and VM3 (but not VM2), while User2 requests the monitoring of its single VM instance, VM4. Based on these requests, the cloud control architecture will instantiate monitoring VMs for the users (Mon1 and Mon2, respectively), and further configure the policy databases of the CloudVMI instances on the relevant physical machines with the information needed to perform the necessary multiplexing/demultiplexing and policing. Note that in our simple example User1 requested a single monitoring VM to monitor two of its VMs (VM1 and VM3) that have been instantiated on two separate physical machines. This implies the ability of CloudVMI to allow remote monitoring of VMs; i.e., as shown in the figure, Mon1 is used to monitor VM3 even though the latter is located on a different physical machine. CloudVMI enables this new remote VMI capability, which will allow a new class of VMI-based cloud-centric applications that can perform functions from a centralized perspective on a set of distributed VMs.

A. CloudVMI as a Service

Fig. 1. CloudVMI as a Service. (Figure: cloud control API and per-user database records. (a) CreateVM calls place VM1 and VM2 on PM1 and VM3 on PM2 for User1, and VM4 on PM2 for User2. (b) CreateMonitoringVM calls add Mon1 on PM1 monitoring VM1 and VM3, and Mon2 on PM2 monitoring VM4.)
B. CloudVMI Architecture

We describe the CloudVMI architecture in the context of the Xen VMM based environment. The architecture is depicted in Figure 2. As shown in the figure, there are two sets of user VMs: VMs being monitored are shown on the left, while the monitoring VMs are shown on the right.

vLibVMI: CloudVMI virtualizes the LibVMI interface and exposes it via the vLibVMI interface. This interface is realized through a combination of the vLibVMI Server Module executing in the privileged dom0 and the vLibVMI Client Library that executes on each of the monitoring VMs. The vLibVMI Client Library provides a LibVMI-like library against which applications link to invoke the vLibVMI server API. Our initial approach is to have the vLibVMI library mimic the original LibVMI interface as closely as possible to simplify porting of applications developed for LibVMI. However, as we discuss later, we expect the vLibVMI interface to evolve over time to better exploit the fact that CloudVMI offers capabilities across multiple VMs and multiple physical machines in a cloud environment. As shown in Figure 2, the vLibVMI Server Module performs multiplexing and policing of monitoring VM invocations to ensure that such invocations are constrained to monitored
Fig. 2. CloudVMI Architecture. (Figure: monitored VMs (domUs) on a physical machine whose dom0 hosts the vLibVMI Server Module, containing the policy DB, LibVMI, multiplexing/policing logic and the server API; monitoring VMs, both local and remote, link against the vLibVMI Client Library and invoke that server API.)

VMs that are associated with the requesting cloud user (or account). As described in Section II-A, these actions are driven from a policy database that the cloud control architecture maintains. Under the hood the Server Module uses existing LibVMI [4] functionality to introspect monitored VMs. Figure 2 also illustrates that CloudVMI allows VMs on the local physical machine to be monitored by applications executing in remote monitoring VMs. This is enabled by exporting the vLibVMI interface as a server API that can be invoked remotely.

State Management: With the original LibVMI and its applications, a vmi instance is typically created, which serves as an application-level handle to refer to the state maintained by the underlying library. This instance is used by the application to perform introspection actions, typically across multiple successive introspection requests. Dealing with this state creation and maintenance requires special attention in CloudVMI. In CloudVMI, the actual state associated with accessing VM memory pages is maintained in the vLibVMI Server Module, while the client handle referencing that state is exposed to applications running on the monitoring VMs via the vLibVMI Client Library. VMI applications that do not cleanly terminate and release the state associated with their instance handles might cause stale state and memory leakage on the server side. A naive solution would be to encapsulate each API call with a pair of instance create and destroy functions.
With such an approach, every LibVMI function creates a vmi instance first, performs the desired function and destroys the instance. The problem with this solution is that it incurs the cost of creation and destruction with every LibVMI function call. Further, instance creation is relatively heavy-weight compared to other functions: it takes about ten milliseconds, which is three to four orders of magnitude longer than the invocation of other functions. Instead of this secure-but-costly solution, CloudVMI manages a hash table of vmi instances and performs garbage collection on those that are not used for a certain time. Figure 3 illustrates how CloudVMI manages this hash table. Each entry consists of a vmi instance address, a key, the age of the entry since it was last used, and the parameters used for its creation and modification. When a user requests to create a new vmi instance, the server creates the instance, pushes it into the hashtable and returns its key (Figure 3, flows 1 to 4). The user can then use the key to access the instance, just as they would use the pointer value of a vmi instance with the original LibVMI. The indirection provided by this approach has the additional security benefit that actual memory location information is not revealed to the monitoring applications. A garbage collector periodically checks the age of each vmi instance since it was last used; vmi instances that have not been accessed for a specified time are destroyed by the garbage collector. Maintaining the hashtable entry itself involves only a limited amount of state.

Fig. 3. vLibVMI Server Module state management. (Figure: example flows between a remote monitoring VM and the vLibVMI Server: vmi_init_remote/vmi_init returning a key; an invocation on a garbage-collected entry triggering implicit re-creation; vmi_destroy_remote removing both the instance and its entry; entries whose age exceeds the gc threshold are collected.)
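The handle-table scheme can be sketched as follows. This is our own minimal illustration, not CloudVMI's actual C++ implementation: clients hold only opaque keys, the server keeps the real instance plus the creation parameters, and a garbage-collected instance is transparently re-created on the next use of the same key.

```python
import itertools
import time

class HandleTable:
    def __init__(self, max_age: float = 10.0):
        self.max_age = max_age
        self.entries = {}            # key -> {"instance", "params", "last_used"}
        self._keys = itertools.count(1)

    def create(self, params):
        key = next(self._keys)
        self.entries[key] = {"instance": self._make(params),
                             "params": params,
                             "last_used": time.monotonic()}
        return key                   # only the opaque key crosses the RPC boundary

    def use(self, key):
        entry = self.entries[key]
        if entry["instance"] is None:                          # was collected:
            entry["instance"] = self._make(entry["params"])    # implicit re-create
        entry["last_used"] = time.monotonic()
        return entry["instance"]

    def destroy(self, key):
        del self.entries[key]        # explicit destroy removes entry and instance

    def collect(self, now=None):
        now = time.monotonic() if now is None else now
        for entry in self.entries.values():
            if entry["instance"] is not None and now - entry["last_used"] > self.max_age:
                entry["instance"] = None   # drop heavy state, keep key + params

    def _make(self, params):
        return {"vm": params}        # stand-in for the expensive vmi_init()

table = HandleTable(max_age=10.0)
key = table.create("VM1")
table.collect(now=time.monotonic() + 60)    # entry ages out, instance dropped
assert table.entries[key]["instance"] is None
assert table.use(key) == {"vm": "VM1"}      # re-created from stored params
```

Note how `create`/`use`/`destroy` mirror the instance lifecycle the text describes, while `collect` reclaims only the heavy instance state.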
Therefore, the garbage collector does not delete the hashtable entry, but retains its key value and the parameter information used for creation and modification. This information is used when the client that created the vmi instance requests to use the instance again: when the server receives a VMI invocation on a garbage-collected instance, it implicitly re-creates the instance based on the previously used parameters (Figure 3, flows 6 and 7). This happens only for instances that were implicitly destroyed; if a user explicitly destroys a vmi instance, the server removes not only the instance, but also its entry in the hashtable (Figure 3, flows 11 to 14). Finally, to prevent memory leakage through forgotten hashtable entries, the Server Module limits the number of vmi instance entries that each user can have. vLibVMI offers an additional function for this feature: users can check the number of vmi instance entries they have and their keys, and they can remove unused entries.

III. IMPLEMENTATION

Our implementation uses Xen and LibVMI. The current version runs on Xen 3.0 and uses LibVMI version 0.9 alpha. As described in Section II-B, CloudVMI consists of two modules, the vLibVMI Client Library and the vLibVMI
Server Module. In our current implementation, the vLibVMI interface is realized using Linux RPC. The client library makes RPC calls to the privileged virtual machine hosting the Server Module for the target virtual machine to be monitored, and returns results to the applications. The vLibVMI Server Module is a server program that receives requests from monitoring virtual machines, performs VMI using the underlying LibVMI interface and returns the results. Our vLibVMI Server Module is implemented in C++. In the current version of CloudVMI, the garbage collection, aging and instance entry management functions are still under implementation, and the hashtable manages only vmi instances and their key values. In addition to the Linux RPC implementation described above, we have an implementation which exposes the vLibVMI interface as a RESTful API. This alternative enables users to perform VMI using HTTP requests, so that users can introspect their virtual machines regardless of the language and platform they use. Since it uses HTTP, this implementation has performance limitations; however, we envision that the ease of use of this approach will enable new uses of CloudVMI. Our RESTful implementation is based on Python Flask and the Tornado server.

IV. EVALUATION

In this section we present our evaluation of the prototype CloudVMI implementation. Our evaluation was performed in the Emulab [6] testbed environment using a pair of Linux physical machines interconnected via a 100 Mbps link. Each physical machine was equipped with 12 GB of RAM and a 64-bit Intel quad-core Xeon E5530 CPU. Both nodes ran Ubuntu Linux with Xen 3.0. We used Ubuntu Linux for the VMs in our evaluation. Below we present three evaluation results. First, Section IV-A describes a simple application that we developed to illustrate the cross-physical-machine introspection capabilities provided by CloudVMI.
Section IV-B presents a macro-level evaluation of CloudVMI by comparing the performance of a number of simple applications using CloudVMI with the same applications using the original LibVMI functionality. This evaluation also demonstrates the ease with which applications developed for the original LibVMI can be ported to CloudVMI. Finally, in Section IV-C we present a detailed micro-benchmark evaluation of our prototype implementation.

A. Cross-physical-machine VMI

A unique feature of CloudVMI is that it can introspect virtual machines running on remote physical machines. The target can be either a single virtual machine or multiple machines spread throughout a cloud infrastructure, allowing the centralized monitoring of virtual machine instances in a cloud environment. Moreover, compatibility with the original LibVMI interface will allow existing VMI-based applications to be readily ported and deployed in a cloud environment. As an illustrative use case of this functionality, we have modified the module-list example distributed with LibVMI to show the list of installed kernel modules in several virtual machines located on different physical machines. This example illustrates the scenario where multiple virtual machines are associated with a single cloud user (or account) and the application monitors the kernel modules associated with all VMs from a centralized monitoring application. Such functionality might, for example, be a building block in a cloud security monitoring or cloud forensics application.

B. Macro-benchmark

TABLE I. APPLICATION AVERAGE RUNNING TIMES

                LibVMI    Cross-VM    Cross-PM
module-list         ms          ms          ms
process-list        ms          ms          ms

To evaluate the overall performance of CloudVMI, we first ran two existing LibVMI examples on CloudVMI and on the original LibVMI and compared the results. The examples we used are process-list.c and module-list.c, which are distributed with LibVMI.
Each of these examples is a console-based application that shows the list of installed kernel modules or the list of running processes of the target virtual machine. We ran these two examples under three different scenarios: (i) using the original LibVMI, (ii) using CloudVMI in a cross-VM setup on the same physical machine, and (iii) using CloudVMI in a cross-physical-machine (cross-PM) setup. Specifically, for the original LibVMI case, we ran the examples in dom0 and introspected a customer VM (domU in Xen's terminology) on the same physical machine. For the cross-VM case, we ran the examples in a domU and introspected another domU within the same physical machine. For the cross-PM case, we ran the examples in a domU and introspected another domU on a different physical machine. With reference to Figure 1 (b), the cross-VM setup is the case where Mon1 introspects VM1, and the cross-PM case is where Mon1 introspects VM3. Table I shows the runtime for each application, averaged over a thousand runs. The target (monitored) virtual machine for this evaluation was an Ubuntu system running with 3 kernel modules and 53 processes. In this static setup, the number of LibVMI API function calls invoked by each example is fixed: 19 for module-list and 173 for process-list. This explains why process-list takes significantly longer to finish than module-list in the CloudVMI cases. More importantly, however, these results suggest that the serial nature of the original LibVMI API might need to be refactored to better suit a cloud-centric VMI. The performance is of course also sensitive to the size of the request parameters and the resulting return values. The API functions called by module-list have fewer parameters and smaller return values (i.e., primitive variable types) compared to process-list. Naturally, heavier API functions like vmi_read_pa(), which returns a specified amount of physical memory in binary form, will introduce more significant overhead.
C. Micro-benchmark

To evaluate the performance of CloudVMI in more detail, we measured invocation times for several different
API functions: vmi_init(), vmi_destroy(), vmi_get_memsize(), vmi_pause_vm() and vmi_resume_vm(). For each function, we called it a hundred thousand times and calculated the average invocation time. As shown in Figure 4, the invocation times were measured at four different points in the CloudVMI architecture.

Fig. 4. Invocation Time Sections in CloudVMI

With reference to Figure 4, the high-level API invocation time is the elapsed time from calling a LibVMI-like function (i.e., a vLibVMI function) to its return in the user-level application. The RPC section is the elapsed time from calling a Linux RPC function to its return in the vLibVMI library code. The Server Service Routine invocation time is the time from the start of the mapped service routine for the API function to its return. Finally, the low-level API invocation time is the time taken to perform the original LibVMI function in the server.

TABLE II. INVOCATION TIMES OF LIBVMI FUNCTIONS

                            Low-level API calling
vmi_init & vmi_destroy                         µs
vmi_get_memsize                           0.64 µs
vmi_pause_vm                              1.79 µs
vmi_resume_vm                             1.66 µs

TABLE III. INVOCATION TIMES OF CLOUDVMI FUNCTIONS UNDER CROSS-VM SET-UP

                          High-level API calling    Low-level API calling
vmi_init & vmi_destroy                        µs                       µs
vmi_get_memsize                               µs                  0.77 µs
vmi_pause_vm                                  µs                  3.53 µs
vmi_resume_vm                                 µs                  3.22 µs

TABLE IV. INVOCATION TIMES OF CLOUDVMI FUNCTIONS UNDER CROSS-PM SET-UP

                          High-level API calling    Low-level API calling
vmi_init & vmi_destroy                        µs                       µs
vmi_get_memsize                               µs                  1.13 µs
vmi_pause_vm                                  µs                  5.68 µs
vmi_resume_vm                                 µs                  5.05 µs

As with the macro evaluation in Section IV-B, this evaluation was performed under three different settings, namely the original LibVMI, cross-VM (same physical machine) and cross-PM (cross-physical machine). Tables II, III, and IV show the results at the user level. For the original LibVMI case, this is the time to perform the original LibVMI functions (the low-level API invocation time); for the CloudVMI cases, it is the high-level API invocation time.
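Because the four measurement points are nested (the high-level time contains the RPC section, which contains the service routine, which contains the LibVMI call), the time spent in each layer alone falls out by subtraction. A small sketch, with numbers that are illustrative assumptions rather than the paper's measurements:

```python
def layer_overheads(low_level, service_routine, rpc, high_level):
    """Given the four nested invocation times (in microseconds, inner to
    outer), return the time attributable to each layer by itself."""
    return {
        "libvmi_call": low_level,
        "service_routine_only": service_routine - low_level,
        "rpc_transport_only": rpc - service_routine,
        "client_library_only": high_level - rpc,
    }

# Hypothetical cross-VM figures for a single call; only the first two
# values correspond to columns that survive in Table V.
print(layer_overheads(low_level=0.77, service_routine=3.71, rpc=55.0, high_level=60.0))
```

This decomposition is what makes it possible to attribute the bulk of the overhead to the RPC transport, as discussed below.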
We also measured the times for the original LibVMI functions during each evaluation and present them together with the result of interest for reference. Each entry is the average invocation time to perform a vLibVMI (or LibVMI) function. For vmi_init() and vmi_destroy(), we paired the calls and measured their invocation time together. Here, we can see the overhead added by CloudVMI to the original LibVMI functionality. The overhead varies depending on the size of the parameters and return values sent and received, but it is around 60 microseconds for the cross-VM case and around 180 microseconds for the cross-PM case. Note that the entries for vmi_init() and vmi_destroy() show doubled overhead because they are the sum of the invocation times of the two functions. Also, the parameter and return value size of vmi_init() is relatively large compared to the other API functions.

TABLE V. DETAILED INVOCATION TIMES FOR CROSS-VM SET-UP

                    vmi_get_memsize    vmi_pause_vm    vmi_resume_vm
Low-level API               0.77 µs         3.53 µs          3.22 µs
Service Routine             3.71 µs         6.19 µs          5.91 µs
RPC                              µs              µs               µs
High-level API                   µs              µs               µs

Table V shows more detailed performance information. We note that the major overhead is due to the network interaction associated with the RPC. The average overhead attributable to RPC is about 50 microseconds. We verified that this overhead corresponds to the inherent overhead of Linux RPC in our evaluation environment.

TABLE VI. INVOCATION TIMES OF RESTFUL CLOUDVMI FUNCTIONS UNDER CROSS-VM SET-UP

                    High-level API calling    Low-level API calling
vmi_get_memsize                         µs                   2.50 µs

As mentioned in Section III, CloudVMI also exports the vLibVMI interface as a RESTful API. Table VI shows an example of the performance of the RESTful API. This result can be compared with the vmi_get_memsize entries in Tables II and III. As might be expected, the RESTful API introduces far more overhead than the RPC-based API.
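To make the extra layers concrete, the following sketch traces the encode/dispatch/decode round trip a RESTful realization implies, with the network transmission elided. The operation name, payload shape and the stub's return value are our own assumptions, not CloudVMI's actual REST API:

```python
import json

def vmi_get_memsize_stub(vm_id: str) -> int:
    """Stand-in for the LibVMI call ultimately performed on the server."""
    return {"VM1": 512 * 1024 * 1024}[vm_id]

def rest_server(raw_request: bytes) -> bytes:
    """Server side: decode the JSON request, dispatch, encode the response."""
    req = json.loads(raw_request)
    if req["op"] == "vmi_get_memsize":
        result = vmi_get_memsize_stub(req["vm"])
    else:
        raise ValueError("unknown op")
    return json.dumps({"result": result}).encode()

def rest_client(op: str, vm: str) -> int:
    """Client side: encode the request, 'send' it, decode the response.
    A real deployment transmits these bytes over HTTP, adding the
    connection and header costs reflected in Table VI."""
    raw = json.dumps({"op": op, "vm": vm}).encode()
    return json.loads(rest_server(raw))["result"]

print(rest_client("vmi_get_memsize", "VM1"))   # 536870912
```

Even with the network removed, every call pays two JSON serializations and two deserializations on top of the underlying VMI work.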
The overhead comes from the many layers of a RESTful API realization: a request is encoded in JSON, transmitted via HTTP, decoded at the server and passed to a Python server before the original LibVMI interface is invoked. Likewise, results return via the reverse path.

V. RELATED WORK

Monitoring virtual machines has been an active area of research. Researchers have proposed various solutions to monitor VMs for attack detection [7], [8], [9], [10], malware analysis [11], [12], [13], debugging [14], and performance [15]. Garfinkel et al. [1] developed a system called Livewire that uses VMI techniques to inspect the runtime security state of virtual machines. Payne et al. [3] created a similar system named XenAccess for VMs running atop the Xen hypervisor [16]. Later, the XenAccess introspection library was ported to other VMMs such as KVM and renamed LibVMI. Realizing the
efficacy of VMI techniques, many security applications using VMI have been developed. Srivastava et al. [2] proposed a white-list based application-level firewall for the virtualized environment that performs introspection on each new connection to identify the process bound to the connection. Jiang et al. [17] developed an out-of-the-box security approach to detect attacks occurring inside a VM. While these approaches have demonstrated the usefulness of VMI, they were developed for the self-hosted virtualized environment and did not take the complexity of elastic computing environments such as clouds into account. In contrast, with CloudVMI, we offer VMI as a cloud service, empowering cloud users to develop their own customized monitoring applications. Further, existing approaches to VMI were device-centric, i.e., limited to the VMs on a single physical machine. In contrast, CloudVMI allows cloud-centric introspection by enabling virtual machine introspection across physical machines. Realizing that security concerns are a main obstacle to the wider adoption of cloud computing, researchers have proposed various ways to offer security-as-a-service in the cloud. Srivastava et al. [18] proposed the notion of a cloud app market that describes the delivery of security solutions via apps similar to smartphone apps. To expose hypervisor-level functionality to cloud users, they suggested modifications to virtual machine monitors and/or nested virtualization. Butt et al. [19] created a self-service cloud platform that allows cloud customers to flexibly deploy their own security solutions. Srivastava et al. [20] created a trusted VM snapshot generation service in the cloud that operates even if cloud administrators are malicious. Brown et al. [21] offered trusted platform-as-a-service for cloud customers to deploy applications in the cloud in a trustworthy manner.
Similar to these efforts, we envision VMI-as-a-service in the cloud to allow cloud users to develop security monitoring applications customized for their VMs.

VI. CONCLUSION AND FUTURE WORK

We presented our work on CloudVMI, which allows virtual machine introspection to be offered as-a-service by cloud providers. CloudVMI virtualizes the low-level VMI interface and polices access to ensure VMI actions are constrained to resources owned by the respective cloud users. Further, CloudVMI allows VMI actions to be performed across different physical machines, thus allowing for cloud-centric introspection. We presented an evaluation of our proof-of-concept implementation of CloudVMI. Our results demonstrate the feasibility of offering VMI as a cloud service and specifically of making VMI cloud-centric as compared to current device-centric approaches. Our results also suggest that compatibility with the existing LibVMI will simplify porting of VMI-based tools to the CloudVMI environment. At the same time, our results indicate that cloud-aware extensions of the VMI interface, for example allowing for batching of requests that would simply be serialized in a traditional non-cloud approach, might be beneficial. We plan to explore such extensions as part of our future work. Finally, we plan to focus future work on the second component of our split VMI-tool development approach by developing more sophisticated applications, including cloud-centric applications which can utilize CloudVMI's unique cross-physical-machine capabilities.

REFERENCES

[1] T. Garfinkel and M. Rosenblum, "A virtual machine introspection based architecture for intrusion detection," in NDSS, San Diego, CA, February.
[2] A. Srivastava and J. Giffin, "Tamper-resistant, application-aware blocking of malicious network connections," in RAID, Boston, MA, September.
[3] B. D. Payne, M. Carbone, and W. Lee, "Secure and flexible monitoring of virtual machines," in ACSAC, Miami, FL, December.
[4] vmitools - Virtual machine introspection tools, p/vmitools/.
[5] B. Dolan-Gavitt, T. Leek, M. Zhivich, J. Giffin, and W.
Lee, "Virtuoso: Narrowing the semantic gap in virtual machine introspection," in Security and Privacy (SP), 2011 IEEE Symposium on, 2011.
[6] B. White, J. Lepreau, L. Stoller, R. Ricci, S. Guruprasad, M. Newbold, M. Hibler, C. Barb, and A. Joglekar, "An Integrated Experimental Environment for Distributed Systems and Networks," in OSDI.
[7] G. W. Dunlap, S. T. King, S. Cinar, M. A. Basrai, and P. M. Chen, "ReVirt: Enabling intrusion analysis through virtual-machine logging and replay," in OSDI, Boston, MA, December.
[8] B. D. Payne, M. Carbone, M. Sharif, and W. Lee, "Lares: An architecture for secure active monitoring using virtualization," in IEEE Symposium on Security and Privacy, Oakland, CA, May.
[9] L. Litty, H. A. Lagar-Cavilla, and D. Lie, "Hypervisor support for identifying covertly executing binaries," in USENIX Security Symposium, San Jose, CA, August.
[10] A. Srivastava and J. Giffin, "Automatic discovery of parasitic malware," in RAID, Ottawa, Canada, September.
[11] A. Dinaburg, P. Royal, M. Sharif, and W. Lee, "Ether: Malware analysis via hardware virtualization extensions," in ACM CCS, Alexandria, VA, October.
[12] H. Yin, D. Song, M. Egele, C. Kruegel, and E. Kirda, "Panorama: Capturing system-wide information flow for malware detection and analysis," in ACM CCS, Arlington, VA, October.
[13] X. Jiang and X. Wang, "Out-of-the-box monitoring of VM-based high-interaction honeypots," in RAID, Surfers Paradise, Australia, September.
[14] S. T. King, G. W. Dunlap, and P. M. Chen, "Debugging operating systems with time-traveling virtual machines," in USENIX Annual Technical Conference, Anaheim, CA, April.
[15] B. C. Tak, C. Tang, C. Zhang, S. Govindan, B. Urgaonkar, and R. N. Chang, "vPath: Precise discovery of request processing paths from black-box observations of thread and network activities," in USENIX Annual Technical Conference, San Diego, CA, June.
[16] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A.
Warfield, "Xen and the art of virtualization," in ACM SOSP, Bolton Landing, NY, October.
[17] X. Jiang, X. Wang, and D. Xu, "Stealthy malware detection through VMM-based out-of-the-box semantic view," in ACM CCS, Alexandria, VA, November.
[18] A. Srivastava and V. Ganapathy, "Towards a richer model for cloud app markets," in ACM Computing Security Workshop.
[19] S. Butt, A. Lagar-Cavilla, A. Srivastava, and V. Ganapathy, "Self-service cloud computing," in ACM CCS.
[20] A. Srivastava, H. Raj, J. Giffin, and P. England, "Trusted VM Snapshots in Untrusted Infrastructures," in Proceedings of the 15th International Symposium on Research in Attacks, Intrusions and Defenses (RAID).
[21] A. Brown and J. Chase, "Trusted platform-as-a-service: A foundation for trustworthy cloud-hosted applications," in ACM Computing Security Workshop, 2011.