Kernel Data Integrity Protection via Memory Access Control
Abhinav Srivastava, Ikpeme Erete, Jonathon Giffin
School of Computer Science, Georgia Institute of Technology

Abstract

Operating system kernels isolate applications from other malicious software via protected memory created by virtual memory management. Even though modern kernels aggregate core kernel code with driver and module components of different provenance, kernel memory remains unified and without isolation. Kernel-level malicious software has full access to the data and operations of all kernel components. In this paper, we create kernel memory protection. We design an access control policy and enforcement system that prevents kernel components with low trust from altering security-critical data used by the kernel to manage its own execution. Our policies are at the granularity of kernel variables and structure elements, and they can protect data dynamically allocated at runtime. Our hypervisor-based design uses memory page protection bits as part of its policy enforcement; the granularity difference between page-level protection and variable-level policies challenges the system's ability to remain performant. We develop kernel data-layout partitioning and reorganization to maintain kernel performance in the presence of our protections. We show that our system prevents illegitimate alteration of security-critical kernel data at a performance cost of 1-20%. By offering protection for critical kernel data, we guarantee that security utilities relying on the integrity of kernel-level state remain accurate.

1 Introduction

Commodity monolithic operating system kernels in execution contain a collection of code and data attributable to the core kernel and each of its dynamically-loaded components, such as drivers and modules. In contrast to the isolated virtual memory regions created by the kernel for each user-level application, all kernel components share a unified address space that allows any component to access the data and code of others. The core kernel manages private and security-critical data objects, such as a task list, a loaded module or driver list, and running process permissions, that describe fundamental properties of the computer system's execution. Kernel data privacy exists in intent only: the unified kernel memory space allows a malicious kernel component to alter the core kernel's private data at will.

Widespread attacks regularly increase their potency by manipulating security-critical data in the kernel using a technique called direct kernel object manipulation (DKOM) [18, 36]. Rootkits often use this technique to hide the existence of malicious processes by eliding task structures for the processes from the kernel's process accounting list. Malicious kernel-level components can hide their own presence by illicitly removing data structures identifying their presence from a kernel-managed list of loaded drivers or modules. In another attack, they escalate process privileges by overwriting the process's user credentials with those of a root or administrative user. Security software relying upon the core kernel's own management information will fail to identify the presence of malicious software. In many attack instances, security software that periodically verifies the consistency of core kernel data structures will be unable to locate any errors.

In this work, we present a system called Sentry that creates access control protections for security-critical kernel data.
We partition kernel memory into separate regions having different access control policies or restrictions detailing when the code of the core kernel and its loaded components can access data in a particular protected region. To avoid both direct and indirect attacks from kernel-level malware, our access control enforcement occurs within a hypervisor executing beneath the kernel running in a virtual machine (VM). The hypervisor mediates the execution of instructions attempting to write protected kernel data. Sentry enforces access control policies that specify what code regions of the kernel are allowed to write what data objects within kernel memory. To determine the origin of an attempted access to protected data, it examines the activation records on the kernel stack at every access.

Our access control protections are general, start from boot, and encompass both statically and dynamically allocated kernel data objects. Providing access control to static data objects is straightforward because their locations remain fixed during the execution of the operating system. As evasive attacks often circumvent detection by manipulating heap-managed kernel data objects, we focus primarily on protections for dynamically-allocated data. In our design, the kernel communicates to the hypervisor the need for mediated access to a newly constructed data object at the time that it constructs the object.

Memory access control provides security for kernel data at the cost of diminished runtime performance. Our hypervisor-based enforcement mechanism verifies memory accesses at the granularity of high-level language variables in the kernel's source code, including individual structure fields contained within a larger aggregate object. We implement mediated access at page-level granularity, the only granularity of memory protection offered by commodity hardware. This mismatch leads to inefficiencies in straightforward implementations of memory access mediation. A memory page containing even one protected variable would require the entire page's data to be protected. While Sentry can easily distinguish between protected and unprotected regions within a single page, it would still be invoked and introduce performance cost for accesses to all unprotected variables on the page. In the worst case, every access to every kernel data object may be mediated; we expect the costs to be extremely high.

To address the performance challenge, we alter the kernel's data structure design and runtime memory layout. Rather than allowing protected and unprotected data to freely intermix on memory pages, we segregate data based on its access policy. We cluster together data having the same access policy on the same memory pages, and separate data with different policies onto different pages. In this design, Sentry will be invoked only for accesses to data requiring mediation, as all unprotected data will be placed on unprotected memory pages. To show the effect of improved memory layout, Figure 1 presents measurements of context-switching time between two processes on a Linux system.

Figure 1: Performance impact of Sentry's access control protections for the Linux task_struct and module data structures. The graph shows context-switch time measured between two processes; smaller measurements are better. Normal measurements have no kernel memory protection, protected includes memory protection without performance optimizations, and Sentry includes both memory protection and optimization.
It is clear that Sentry performs much better than the case where protection is offered to a kernel whose memory layout is not altered.

Critical data protection provides a variety of system and software security techniques with assurances in the integrity of kernel state. A rootkit that attempts to hide itself or a process by manipulating kernel data will trigger an access violation in Sentry; if the rootkit chooses not to attempt the alteration, then traditional rootkit detectors that search for suspicious modules, drivers, and processes will locate the attack. Security software using virtual machine introspection (VMI) [12] to understand the security state of an untrusted guest operating system can rely upon state information in the guest protected by Sentry.

We developed Sentry using the Linux operating system and Xen hypervisor. To demonstrate the feasibility of our ideas, we altered the memory layout of two Linux kernel data structures: task_struct and module. We chose these two data structures due to their complexity, widespread use in the kernel, and frequent alteration by DKOM rootkits. However, our framework is general and supports protection and memory layout optimization of any arbitrary kernel data structure.

Our system is effective at preventing and detecting rootkits without incurring significant performance overhead. We tested Sentry with a collection of rootkits that use DKOM to make malicious modifications to kernel data. It successfully prevented all rootkits that attempted to alter kernel data illegitimately and allowed only legitimate changes performed by the core kernel. We also carried out experiments to test Sentry's performance using both micro and macro benchmarks. Sentry incurs 0-1% overhead on CPU and network bound processes, and it incurs 20% overhead on mixed loads. The results indicated that Sentry's design significantly improves the system's performance when compared with the scenario where an unpartitioned kernel was protected (0-16% overhead on CPU and network bound processes and 45% overhead on mixed loads).

The continuing evolution of hypervisors is following multiple paths and multiple philosophies. Our work exists in a vein that considers new functionalities, including security services, that may be provided from a hypervisor [13]. One view of this research direction is that hypervisors are gaining capabilities traditionally found in operating system kernels. The collection of components of varying provenance within a modern kernel's address space echoes the collections of applications running at a computer system's user level. Sentry's memory protections then mirror the isolated virtual memory address spaces provided by OS kernels for such processes.

We believe that this work makes the following contributions:

- We create protected memory regions within the unified kernel data space. A kernel can then isolate its security-critical data from kernel components having low trust, creating assurance in the critical state.
- We protect dynamically-allocated kernel objects that are altered by illegitimate, untrusted kernel components. Sentry detects untrusted components that use DKOM to alter critical kernel data.
- We show how to optimize kernel memory space layout for the protection constraints created by our system. Our layout changes do not impact correctness of kernel execution, but they allow our access control enforcement to operate with higher performance.

2 Related Work

Our primary contribution is integrity protection for kernel data. Previous studies have examined this problem and arrived at solutions different from our own. Petroni et al. [24] proposed a system that detects semantic integrity violations in kernel objects, such as a process's task structure reachable when traversing a linked list used by the scheduler but not reachable when traversing the process accounting list. Baliga et al. [1] developed a similar system that hypothesizes invariants on kernel data structures based on observations of the kernel's execution. Periodic invariant verification attempts to discover the sort of data manipulation addressed by our work, but it has some limitations overcome by Sentry. These techniques succeed only when invariants can be stated for a data object. This is clearly possible for structures like a process, as the invariant can compare the reachable nodes along two different traversals: the process accounting list and the process scheduling list. It is not evident that invariants can be written for other structures, such as the list of loaded kernel modules, which do not offer multiple views. Sentry operates differently: it mediates all attempted data alterations and allows only those invoked by legitimate kernel functionality. While previous approaches detect malicious modifications, Sentry prevents the illegitimate changes of critical data from even occurring.
XFI [10] provides integrity protection for data (among other protections) via guarded write instructions in software components subject to access control policy constraints. XFI's use of inlined verification imposes restrictions on software development that may prove difficult to satisfy in actual deployments. For example, it restricts use of the x86 instruction set, requires code that can be statically analyzed, and requires buy-in from all kernel drivers and modules (including rootkit modules). In contrast, Sentry requires cooperation only from the core, static kernel; dynamically-loaded components are unconstrained. The designs of XFI and Sentry also highlight differences between inlined monitoring and external protection. XFI guards all computed writes to ensure that no write clobbers a protected value. When many of these writes target unprotected addresses, performance still degrades. Sentry, in comparison, mediates writes only when they actually attempt modification of protected data. XFI's protections occur at the origin (the write instruction), whereas Sentry's protections occur at the destination (the security-critical data).

Many kernel-level attacks hook a kernel's control flow by overwriting a code pointer stored in a static kernel data table or variable. On the Windows platform, PatchGuard [21] protects read-only structures containing code pointers, such as the system service descriptor table (SSDT) and interrupt descriptor table (IDT). Xu et al. [40] implemented a framework that controls access to read-only kernel data structures. Petroni and Hicks [25] detected illicit control-flow transfers arising from hooked code pointers. These systems succeed at the protection or detection of alterations made to read-only dispatch tables or known potential hooks. The goals of Sentry are more general, as it provides more extensive protection by additionally mediating access to dynamically-allocated data structures.

We partition kernel memory into protected and unprotected regions and partition data objects into matching components. The general concept of partitioning a single object into secure and insecure portions has appeared as a solution to other software security problems. Multics protection rings created different memory regions having different access permissions [7]. Mondrian [39], a fine-grained memory protection system, allows multiple protection domains to share and export services. Ta-min et al. [37] proposed a system that partitions the system-call interface into secure and insecure components: a hypervisor processes secure system calls while insecure calls are handled as usual. Payne et al. [28] split a single security application among multiple VMs and protected the components placed in any untrusted VM. Chong et al. [6] automatically partitioned web applications to ensure that the resulting placement of code and data is secure and efficient. Brumley et al. [3] automatically partitioned a single program into a privileged program that handles all the privileged operations and an unprivileged program that does everything else. Unlike these examples of interface and application partitioning, where the goal is to offer better security, Sentry uses memory and object partitioning to provide protected kernels with better performance.

Sentry enforces a memory access policy at the hypervisor level and executes the protected operating system within a virtual machine.
Wide-ranging security applications use virtualization technology to help build attack-resistant computer systems. Uses include file system protection [43], exfiltration detection [35], rootkit detection [30], malware analysis [8, 22, 42], and other security services [9, 11, 19, 29, 31-33, 42]. Garfinkel et al. [12] proposed the idea of virtual machine introspection (VMI) to detect rootkits; subsequent VMI research developed more comprehensive intrusion detection systems by inspecting the VM's memory [16, 25, 35], its disk [27], or other internal events such as system calls [15, 17]. VMI-based security solutions that read the runtime memory state of an untrusted guest system interpret that state to form conclusions about the security condition of the system. This interpretation introduces a trust inversion: it requires the high-privilege security software to trust the data read from the low-privilege guest VM. Sentry's data protections can provide assurance in the security-relevant information read by a VMI-based utility, which restores the legitimacy of its analysis.

Our low-level memory access mediation uses memory page protection bits to cause deliberate faults when kernel code attempts to alter a protected variable. A hypervisor can provide protection to arbitrary memory pages allocated to a VM. Payne et al. [28] used memory protection to guard hooks inside the guest OS. SecVisor [34] protected static kernel code, though it provided no protection for kernel data. Chen et al. [5] built a mechanism called multi-shadowing that presents different views of physical memory to protect the privacy and integrity of application data. Litty et al. [20] developed a system that neutralizes the ability of rootkits to hide executing binaries from the administrator by leveraging the non-executable bit of memory pages. Yang and Shin [41] used a hypervisor and cryptography to protect one application's memory contents from other applications in the system. Sentry uses similar manipulation of page protections to control and monitor memory use within a guest OS.

3 Overview

A running kernel aggregates code and data from the core kernel and from a collection of dynamically-loaded modules and drivers. These different components may engender varying levels of trust in regions of the kernel. We assume that the core kernel implemented by the system's kernel developers receives full trust. Many modules and drivers, however, have unknown provenance and hold only limited trust. Sentry monitors the interactions between code with low trust and critical data with high trust.

The core kernel includes operations and data exported to modules for their use as well as internal functionality and objects meant to be managed only by the core kernel itself. For example, the list of loaded modules, the process accounting list, the scheduler list, and the run queue of the Linux kernel exist in the core kernel's data and heap space and are altered by internal functionality of the kernel. However, the lack of memory barriers between the core kernel and its dynamically-loaded components prevents the kernel from disallowing illicit alterations of its internal data structures by malicious modules or drivers.

Consider as an example the Linux kernel's task management. The kernel stores per-process data, such as the user IDs and group IDs that determine a process's privilege level and allowed access, in an aggregate data structure called the task_struct. Each task_struct also contains references to the next and previous task structures and thus forms a node in a doubly-linked list. Security tools in the kernel or in a monitoring virtual machine find the set of running processes by traversing the doubly-linked list.

Suppose that a malicious application wants to execute undesirable functionality as a high-privilege user while evading security software that searches for unexpected processes. The application may include a kernel-mode rootkit that elevates the malicious process's privilege by directly writing to its task_struct and hides the process by altering the kernel's process accounting list. Figure 2 shows a C code snippet from a rootkit [26] that assigns superuser privileges to its malicious process. In the code shown in Figure 3, a rootkit [38] uses the kernel macro REMOVE_LINKS to remove its malicious process's task_struct from the doubly-linked process accounting list. This macro alters the next and previous pointers within the list and allows the rootkit to hide its malicious process.

    asmlinkage int give_root()
    {
        if (current->uid != 0) {
            current->uid  = 0;
            current->gid  = 0;
            current->euid = 0;
            current->egid = 0;
        }
        return 0;
    }

Figure 2: Fragment of rootkit code that elevates privileges of non-superuser processes to superuser (ID 0).

    int init_module()
    {
        task = find_task_by_pid(pid);
        if (task) {
            REMOVE_LINKS(task);
        }
    }

Figure 3: Fragment of rootkit code that removes a malicious process identified by pid from the process accounting linked list.
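The view that such unlinking defeats is simply a walk of the accounting list. As a point of reference, the sketch below shows how an in-kernel tool might enumerate processes; it assumes a 2.6-era kernel where for_each_process() and tasklist_lock are available to module code. A task removed with REMOVE_LINKS never appears in this output.

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/sched.h>

    /* Enumerate the process accounting list that DKOM rootkits unlink from. */
    static int __init list_tasks_init(void)
    {
        struct task_struct *p;

        read_lock(&tasklist_lock);      /* serialize against fork and exit */
        for_each_process(p)
            printk(KERN_INFO "pid %d comm %s\n", p->pid, p->comm);
        read_unlock(&tasklist_lock);
        return 0;
    }

    static void __exit list_tasks_exit(void) { }

    module_init(list_tasks_init);
    module_exit(list_tasks_exit);
    MODULE_LICENSE("GPL");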
3.1 Threat Model

We assume that a powerful adversary is able to execute malicious code at both the kernel and user level, and is able to alter kernel data structures at will. Kernel-level access can be achieved via kernel-level exploits, social engineering that entices a user to install a malicious kernel module, or other security flaws present in the system. This threat model reflects the current generation of rootkit attacks. Because attacks can occur at the kernel level, our defense is implemented in a hypervisor to provide tamper resistance.

Virtualized environments create a security virtual machine (VM) and multiple user VMs. Our threat model considers the hypervisor and security VM as part of the trusted computing base (TCB). User VMs are untrusted and are open to both direct and indirect attacks. We do assume that the core kernel code of a user VM is protected and cannot be subverted by any malware running in the user VM. This requirement can be realized by making the core kernel's code pages read-only [34].

3.2 Kernel Data Integrity Model

Potentially malicious kernel modules should not be able to alter security-critical data intended to be managed only by the core kernel. Sentry protects such data and enforces data access restrictions based upon the origin of the access within the code of the kernel and its modules or drivers. The data integrity model is straightforward and matches that of the Biba ring policy [2]:

- Trusted core kernel code may write to any kernel data.
- Loaded modules and drivers can write to any low-integrity data, which includes all driver data and portions of the core kernel data.
- Loaded modules and drivers cannot write to high-integrity data of the core kernel, either through a direct write or by calling into existing code of the core kernel that will execute the write on behalf of the module, unless the control-flow transfer into the core kernel targets an exported function.

Exported functions act as trusted upgraders. The intent of the kernel developers was to provide an API through which modules and drivers can legitimately make changes to critical data, and the changes leave the kernel in a consistent state. To determine if a loadable kernel component called into a core kernel function to induce alteration of high-integrity data, Sentry performs a stackwalk of the kernel stack to identify activation records and the call chain that led to an attempted write.

Consider again the rootkit behaviors of Figures 2 and 3. When the rootkit attempts to directly modify the privileges of a process, Sentry will mediate the write to security-critical data of the core kernel. The malicious code that modifies privileges by directly writing to memory is in a loaded module and not in the core kernel code, so Sentry will prevent the write (and optionally alert the system's user or administrator of a likely malware infection). Should the macro expansion of the rootkit's attempted unlinking of the task_struct from the linked list include function calls into internal list management functions of the kernel, then Sentry's stackwalk will step into an activation record for the code of the loaded module. Again, Sentry will deny the data write.

4 Kernel Memory Access Control

We designed a hypervisor-based kernel data protection system, Sentry, that protects sensitive kernel data from unauthorized modification. Its design reflects these goals:
- Resist direct attack: Sentry protects a kernel's dynamic data from rootkits. Since a rootkit executes in kernel space, any kernel-level tool can easily be subverted. In order to be tamper-resistant, Sentry's design uses a hypervisor to remain isolated from an untrusted kernel.
- Performance: Mediation of memory accesses may incur high performance cost if designed naïvely. To keep the overhead low, Sentry uses a novel memory-partitioning based approach that lays out sensitive members on separate memory pages and protects those pages using the hypervisor.

Figure 4: Sentry's architecture. (The user VM holds the original, secure, and insecure data structures; the security VM runs the Sentry controller; Xen contains the policy, the enforcer, the instruction emulator, the execution history extractor, and the memory protector, reached from the guest kernel via VMCALL.)

Sentry's architecture consists of three components: a hypervisor, a user virtual machine (VM), and a trusted security VM (Figure 4). The hypervisor is the heart of Sentry's protections, comprising the memory access policy description, a memory protection module, and a policy enforcement module. The policy statement includes a description of trusted code regions allowed to modify secure data. The memory protection module protects all kernel memory pages containing security-critical variables. The policy enforcer mediates attempted writes to protected data and uses the policy to determine when writes should be permitted. If permitted, the enforcer's instruction emulator emulates the write operation in a manner transparent to the user VM. The enforcer includes functionality to extract execution history in the form of activation records present on the guest kernel's call stack. The user VM runs the operating system that may fall victim to kernel-level attacks. Finally, the security VM runs the controller software that starts and stops Sentry's protections and receives alerts from the hypervisor when a malicious modification is attempted. Sentry currently supports the Xen hypervisor and Linux guest kernels, and our continuing presentation of Sentry will use Xen and Linux terminology. Sentry's techniques are general and apply to many hypervisors and operating systems.

4.1 Policy

Sentry enforces integrity protection policies based on the trust level of the code attempting to alter critical data. Sentry's policies describe code regions or function call chains that are allowed to modify security-critical kernel data. Any access request that does not fall into the pre-defined trusted code regions will be denied. We identify the following three types of code regions that can legitimately modify protected data:

1. All core kernel code (that is, code not in loadable modules or drivers) is trusted to correctly manage its own private data. This code spans memory from the Linux kernel symbol _text to _etext.
2. Kernel code from __init_begin to __init_end contains code required for the kernel to successfully boot and is likewise trusted.
3. Alterations reachable from an exported function's entry point reflect valid management of private kernel data even when a loaded component calls into the function. Exported functions are deliberate APIs created by the core developers specifically for loadable modules and drivers; these functions leave an operating system's kernel in a consistent state.

4.2 Activation of Mediated Access

To enforce integrity protections, Sentry must mediate all attempts to overwrite security-critical data. Sentry uses Xen's management of memory page access protections to disable write permissions on any page holding protected kernel data. This protection provides Sentry with the ability to interpose on write accesses to protected memory pages; any write operation to a protected page causes a page fault, and on each fault the hypervisor gains control of execution.

Sentry's memory protection relies on knowledge of the location of dynamically-allocated kernel objects. It uses code instrumentation within the user VM's kernel to activate and deactivate protections in concert with object construction and destruction. The instrumentation uses the Intel virtualization instruction VMCALL [14], which transfers control to the hypervisor when executed. Each VMCALL passes the page frame number (PFN) and virtual address of the newly allocated memory page requiring protection. We assume that existing techniques can protect the core kernel code's integrity [34], so an attacker will not be able to remove our instrumentation. Sentry prevents attackers from making a VMCALL to deactivate protection by verifying that the request was generated within the core kernel at an expected execution point.

Sentry receives the VMCALL within the hypervisor and handles the request from the guest. When the memory protection module receives a request to add protection for a guest's page, it adds the PFN to a list of protected pages and removes the page's write permission. Sentry also flushes the translation lookaside buffer (TLB) to remove any previous mappings that may exist for the protected PFN. When a request to remove protection on a page comes to the memory protector, it removes the PFN of the page from the list of protected pages and restores write permission. Figure 5 presents a flowchart describing the entire process of adding and removing protection.

Figure 5: Steps involved during addition and removal of memory page protections.

4.3 Policy Enforcement

Sentry's policy enforcer also resides in the hypervisor, and its duty is to enforce the pre-defined security policies. Both legitimate and malicious writes to protected pages will cause a page fault received by Xen, providing opportunities for mediation. At each fault, the enforcer determines if it is due to Sentry's protection by verifying the PFN of the faulting page. If the PFN belongs to the list of protected PFNs, then it performs further actions; otherwise, it directs Xen to resume normal operation.
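The decision the enforcer makes on each such fault, detailed in the rest of this subsection and summarized by the flowchart of Figure 6, can be condensed into the sketch below. All helper names (protected_pfn_lookup, guest_read_eip, in_trusted_region, walk_guest_stack, callee_is_exported, emulate_guest_write, deny_and_alert) are hypothetical stand-ins for logic that lives inside Xen's shadow page-fault path; this illustrates the policy rather than reproducing Sentry's actual code.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_FRAMES 64

    extern bool     protected_pfn_lookup(uint64_t pfn);
    extern uint64_t guest_read_eip(void);
    extern bool     in_trusted_region(uint64_t addr);
    extern size_t   walk_guest_stack(uint64_t *ret_addrs, size_t max);
    extern bool     callee_is_exported(const uint64_t *ret_addrs, size_t idx);
    extern void     emulate_guest_write(uint64_t eip, uint64_t fault_va);
    extern void     deny_and_alert(uint64_t addr, uint64_t fault_va);

    /* Returns true if Sentry handled the fault, false if Xen should resume normally. */
    bool sentry_handle_write_fault(uint64_t fault_pfn, uint64_t fault_va)
    {
        uint64_t eip, ret_addrs[MAX_FRAMES];
        size_t n, i;

        if (!protected_pfn_lookup(fault_pfn))
            return false;                      /* unrelated fault: not a protected page */

        eip = guest_read_eip();                /* which instruction attempted the write? */
        if (!in_trusted_region(eip)) {
            deny_and_alert(eip, fault_va);     /* untrusted module wrote directly */
            return true;
        }

        /* Trusted code performed the write; ensure no untrusted caller reached it
         * except through an exported API entry point (Section 3.2). */
        n = walk_guest_stack(ret_addrs, MAX_FRAMES);
        for (i = 0; i < n; i++) {
            if (in_trusted_region(ret_addrs[i]))
                continue;
            if (callee_is_exported(ret_addrs, i))
                break;                         /* legitimate call through the exported API */
            deny_and_alert(ret_addrs[i], fault_va);
            return true;
        }

        emulate_guest_write(eip, fault_va);    /* permitted: replay the write transparently */
        return true;
    }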
On a page fault caused by Sentry's protection, the enforcer first uses the user VM's instruction pointer (eip) to identify which code is directly attempting to write to the protected page. If the instruction pointer belongs to an untrusted code region, then the access must be denied. If the instruction pointer belongs to a trusted code region, then the enforcer must ensure that the trusted code was invoked legitimately. It extracts the execution history of the kernel associated with the memory write by executing a stackwalk of the user kernel's stack. When encountering a stack frame for a core kernel function with a return address pointing back to untrusted code, Sentry checks whether the core function is part of the kernel's exported API. If not, then untrusted code invoked an unsafe call into the kernel, and the memory alteration must be prevented. Otherwise, Sentry allows the write operation.

Once Sentry determines that a write is permitted by the integrity protection policy, it emulates the write in the same way that the guest system would have done had the protection been absent. When the emulation completes, Sentry updates the guest context registers so that the mediation of writes remains completely transparent to the guest. The complete flowchart of resolving a write page fault is shown in Figure 6.

Figure 6: Steps used by Sentry to resolve a write fault on a protected page.

4.4 Memory Layout Optimization

The normal layout of kernel objects in memory challenges Sentry's ability to achieve the performance goal. Sentry's policies protect individual kernel variables and fields of aggregate structures, but it mediates writes at the granularity of complete memory pages. Structures like task_struct and module contain a mix of security-critical and non-critical fields laid out contiguously in memory, thereby making it difficult to provide efficient and fine-grained protection. If any variable on a page is protected, the entire page suffers the overhead of mediated writes. We would prefer that only write operations to security-sensitive members invoke a fault, thereby eliminating the performance overhead resulting from faults generated by writes to non-critical data.

To address this problem, we developed two approaches to partition a data structure into secure and insecure pieces. All secure structures should be located together on protected pages, and all insecure structures can reside on unprotected pages. Once we identify all sensitive members (described in Section 4.5) in a kernel data structure, say Obj, we partition it. The first approach, structure division, partitions Obj by creating a new data structure insecure_Obj and moving all non-sensitive members into this new structure. This procedure leaves the original Obj structure with security-critical members only. For example, Figure 7 shows the partial division of the Linux task_struct. A kernel using partitioned structures must allocate memory separately for the secure and insecure pieces to create different memory regions for protected and unprotected members. All existing kernel code must be updated to use partitioned structures.
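To make the allocation side concrete, the sketch below allocates a divided structure: the secure piece on its own page, so the page can be write-protected, and the insecure piece through an ordinary kmalloc() reached via the protected pointer. The names obj and insecure_obj are generic, and sentry_protect_page() is a hypothetical wrapper for the VMCALL instrumentation of Section 4.2; Section 5.1.1 describes the real changes made to task_struct allocation.

    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/slab.h>

    struct insecure_obj {
        unsigned long personality;          /* example non-critical member */
        /* ... remaining non-critical members ... */
    };

    struct obj {
        unsigned int uid;                   /* example security-critical member */
        /* ... remaining security-critical members ... */
        struct insecure_obj *insecure;      /* protected link to the insecure piece */
    };

    /* Hypothetical guest-side wrapper around the VMCALL of Section 4.2. */
    extern void sentry_protect_page(unsigned long pfn, unsigned long va);

    static struct obj *obj_alloc(void)
    {
        struct obj *o = (struct obj *)__get_free_page(GFP_KERNEL);
        if (!o)
            return NULL;

        o->insecure = kmalloc(sizeof(*o->insecure), GFP_KERNEL);
        if (!o->insecure) {
            free_page((unsigned long)o);
            return NULL;
        }

        /* Ask the hypervisor to revoke write permission on the page that now
         * holds the security-critical piece; later writes by the core kernel
         * fault into Sentry and are emulated if the policy allows them. */
        sentry_protect_page(page_to_pfn(virt_to_page(o)), (unsigned long)o);
        return o;
    }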
All members that belong to Obj can still be accessed as before; however, code that accesses non-critical members must now reach them through insecure_Obj. To solve this problem, we first linked Obj and insecure_Obj by adding a new pointer field in Obj called insecure that points to the insecure_Obj structure. Second, we modified all affected kernel code by adding another level of indirection through the insecure pointer. For example, if code in the kernel was accessing the field journal_info as current->journal_info, it is modified to become current->insecure->journal_info, where current points to the task_struct of the currently executing process.

    /* Before partitioning */
    struct task_struct {
        uid_t uid, euid, suid;
        gid_t gid, egid, sgid;
        u64 acct_rss_mem1;
        u64 acct_vm_mem1;
        struct list_head tasks;
        char comm[16];
        void *journal_info;
        unsigned long personality;
        struct audit_context *audit_context;
        ...
    };

    /* After division: secure piece (protected page) */
    struct task_struct {
        uid_t uid, euid, suid;
        gid_t gid, egid, sgid;
        struct list_head tasks;
        char comm[16];
        struct insecure_task_struct *insecure_task;
        ...
    };

    /* After division: insecure piece (unprotected page) */
    struct insecure_task_struct {
        unsigned long personality;
        struct audit_context *audit_context;
        void *journal_info;
        u64 acct_vm_mem1;
        u64 acct_rss_mem1;
        ...
    };

Figure 7: Memory partitioning of the Linux task_struct structure.

Although structure division requires code revision, it is largely a one-time design cost borne by kernel developers offering long-term improvements to a kernel's security. Subsequent, unrelated kernel development may incur small maintenance costs as developers choose to shift variables to or from the protected region.

A variation of the structure division approach keeps insecure members in Obj and moves secure members to a new data structure, say secure_Obj. It links the two structures with a new pointer from Obj to secure_Obj. This design creates a security flaw in the system. An attacker who wants to hide her malicious process can easily remove the link from Obj that points to secure_Obj, because Obj contains non-critical members and is not protected by Sentry. Once the pointer to secure_Obj is removed, she can easily add a new pointer that points to her own secure_Obj, which is not protected by Sentry. Our previously described partitioning approach, which contains pointers from secure to insecure, does not suffer from this problem. Since all secure members, including the pointer to insecure members, are protected, no such modification can occur.

Our second partitioning approach, structure alignment, places protected and unprotected variables of a unified data structure on separate memory pages by aligning the structure in memory so that it lies across a page boundary. Only one of the two pages is protected by Sentry, and the structure's security-critical fields lie on that page. To accomplish the structure alignment, we group all security-critical members in Obj together at the start of the structure, group all non-critical fields at the end, and add a new alignment buffer to the structure between the two groups. This buffer is simply padding that forces all non-critical members onto the second, unprotected memory page.

These partitioning strategies have trade-offs. Structure division provides freedom in laying out the protected and unprotected structures in memory at runtime, though its need for kernel source code revision adds complexity. Structure alignment requires only a minimal source code change to the definition of the structure, and it could easily be integrated into an existing kernel by creating a compile-time option that inserts or removes the alignment buffer. However, it may not be an effective use of kernel memory, and it imposes constraints on runtime memory layout. We demonstrate in Section 5 that both designs are feasible, even on pervasive kernel data structures.

4.5 Identification of Security-Critical Members

Our memory layout optimization design depends upon the identification of security-critical and non-critical members in a data structure. We define a member in a data structure to be security-critical if:
- It is manipulated by attackers to carry out malicious activities. For example, many Linux-based rootkits modify the uid and gid fields in a task_struct to elevate the privileges of their malicious process. In another example, they hide the presence of their malicious modules by modifying the next and prev pointers of a module structure. Based on this notion, we collected rootkits to identify the kernel variables that they alter. This approach provided a reasonable idea of those data structure fields commonly modified by attackers and needing immediate protection.
- Subject-matter experts, such as core kernel developers, identify the variable as security critical. They can identify important members in a kernel data structure during a development phase, before those members are misused by attackers.
- Defensive systems rely on the variable's integrity in order to understand the security state of the user VM. For example, VMI applications [16] often rely on the integrity of process accounting lists and filesystem structures when constructing a trusted view of the system.

We believe that these criteria offer sufficient ability to identify security-sensitive members present in kernel data structures.

4.6 Sentry Controller

The security VM runs an application that controls Sentry's operation. The controller can enable or disable Sentry's protection inside the hypervisor. It interacts with the hypervisor using the hypercall interface provided by Xen. Whenever the hypervisor component of Sentry records an attack, it informs the controller so that necessary actions can be taken, as specified by the policy or by an administrator.

5 Implementation

We developed Sentry using Linux guest operating systems and the Xen hypervisor version 3.2. We describe low-level implementation details of our system in this section.

5.1 Data Structure Layout

We applied our partitioning techniques to Linux's task_struct and module data structures. We chose these two data structures due to their complexity, their relevance to current kernel-level attacks, and their pervasiveness in the kernel. The process data structure is important to the kernel because it is the fundamental unit of execution, and its complexity is reflected in the fact that it contains 122 members. The module data structure has 29 members and is used when any driver or module is loaded or unloaded, and whenever any subsystem of the kernel implemented as a driver, such as a filesystem, disk, or network device, is accessed. To demonstrate the feasibility of different partitioning strategies, we partitioned each of the two structure types with a different technique. We applied structure division to the widely-used task_struct (Section 5.1.1) and structure alignment to the module structure (Section 5.1.2).

5.1.1 Partitioning of task_struct

To apply our partitioning strategy to the task_struct structure, we first identify its security-critical members. As described in Section 4.5, we identified critical members by analyzing rootkits and the Linux kernel source code. We categorized 28 of the 122 members as critical and chose those for protection and partitioning. We divided task_struct into two parts: task_struct, containing security-sensitive members, and insecure_task_struct, containing all non-sensitive fields.

Before partitioning, memory is allocated to an instance of the task_struct via kmem_cache_alloc. This function allocates the requested memory bytes by retrieving them from a cache of available memory.
The retrieved memory might cross page boundaries, consequently making it difficult to provide protection for only those members that need it. Hence, we changed the memory allocation to instead use get_free_pages, which allocates complete pages, and kmalloc. Using get_free_pages, we allocated each task_struct on a complete page, thereby separating the critical members from the non-critical fields in kernel memory; we allocated each instance of insecure_task_struct using kmalloc. As described previously, we connect the two substructures by maintaining a reference from task_struct to insecure_task_struct. We finally modified Linux's free_task function to deallocate the memory pages allocated to task_struct and insecure_task_struct separately.

The first process created on a Linux system during boot, init_task, is defined specially in the kernel code and requires its own partitioning. Its creation differs from that of other processes: other processes are created and initialized using the kernel function do_fork, which copies the parent's task_struct into the new process's memory and then modifies the necessary members, but init_task has no parent process. The init_task object is created separately by the kernel and initialized using the INIT_TASK macro, a static initializer of the init_task process. To apply our partitioning and protection to the init_task object, we modified the existing INIT_TASK macro and created a new macro called INIT_UNSCR_TASK. We removed the initialization of all unprotected members from INIT_TASK and inserted it into the new macro. In this way, all security-critical members of init_task are initialized using the INIT_TASK macro, and the remaining non-critical members are initialized using INIT_UNSCR_TASK.

After the structure partitioning, we modified the Linux kernel source code to work with the partitioned structure. As part of this process, we also updated driver code, since some drivers also rely on the process structure for portions of their functionality.

5.1.2 Partitioning of module

We partitioned Linux's module structure using our page alignment technique. We categorized 2 of the structure's 29 members as critical and separated them from the non-critical members with an alignment buffer that places the critical fields on a different page from the non-critical fields at runtime. We first grouped all the security-critical members in the module data structure together by reorganizing the data structure. Our new alignment field provided padding that filled the rest of the memory page and caused the non-critical members to cross the page boundary onto a new, unprotected page. This alternative partitioning technique did not require creation of a new insecure structure, as was done for the task_struct. The approach required no source code modification beyond the addition of the alignment buffer and required only a straightforward recompilation of the kernel due to the changed field offsets within the module structure.

5.2 Access Mediation and Policy Enforcement

The memory protection system of Sentry operates in two phases, both occurring concurrently: a management phase, when the kernel adds or removes a page frame number (PFN) to or from the list of protected PFNs, and a mediation phase, providing the actual memory protection. To perform addition, removal, and lookup operations on a PFN, we created new APIs inside the Xen hypervisor.
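The bookkeeping behind such an API is simple; the sketch below shows one way to maintain a set of protected PFNs. The type and function names here are illustrative only (the concrete functions are named in the next paragraph), and a real implementation inside Xen would use Xen's own list and locking primitives.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct pfn_node {
        uint64_t pfn;
        struct pfn_node *next;
    };

    static struct pfn_node *protected_pfns;   /* head of the protected-PFN list */

    /* Is this PFN currently protected? Consulted on every shadow page fault. */
    bool pfn_db_contains(uint64_t pfn)
    {
        struct pfn_node *n;
        for (n = protected_pfns; n; n = n->next)
            if (n->pfn == pfn)
                return true;
        return false;
    }

    /* Add a PFN; returns false if it was already present or allocation failed. */
    bool pfn_db_add(uint64_t pfn)
    {
        struct pfn_node *n;
        if (pfn_db_contains(pfn))
            return false;
        n = malloc(sizeof(*n));
        if (!n)
            return false;
        n->pfn = pfn;
        n->next = protected_pfns;
        protected_pfns = n;
        return true;
    }

    /* Remove a PFN when the guest deactivates protection on its page. */
    void pfn_db_remove(uint64_t pfn)
    {
        struct pfn_node **pp = &protected_pfns;
        while (*pp) {
            if ((*pp)->pfn == pfn) {
                struct pfn_node *dead = *pp;
                *pp = dead->next;
                free(dead);
                return;
            }
            pp = &(*pp)->next;
        }
    }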
The API includes the functions addpfntodb, removepfnfromdb, and checkpfnindb, providing Sentry with the ability to add, remove, and find a protected PFN, respectively.

The second phase works by modifying the shadow page table (SPT) code of Xen. The SPT is the native page table used by the hardware and managed by Xen. To provide memory protection, we modify the sh_propagate function. When a new memory page is added to the guest page table, sh_propagate propagates this entry to the SPT to keep the two tables in sync. While propagating this update, the memory protection system of Sentry intercepts the update and checks whether the propagation involves a protected page. If the page belongs to the list of protected pages, it removes write permission from the page by setting the page's write bit to zero in the SPT.

To intercept subsequent write faults on a protected page, we modified the shadow page fault handler function called sh_page_fault. When a fault occurs, Sentry's code inside sh_page_fault verifies whether or not the fault is a result of its protection mechanism. The verification dictates how the fault is to be processed by Sentry. If the fault is due to some other activity in the guest, then Sentry ignores it and resumes normal operation. Otherwise, Sentry looks into the fault to determine what code region is attempting to write to the protected page, as described in Section 4.3.

5.3 Instruction Emulation

When the kernel memory access control policy permits a mediated write operation, Sentry must reproduce the effects of the operation in the guest operating system's memory. This capability resides in Sentry because the guest operating system cannot execute the write operation itself due to the write protection bit set on the faulted page. We implemented an instruction emulator inside Xen to perform the emulation of memory writes. With this emulator, Sentry can replay attempted writes once they are determined to be legitimate.

To emulate an instruction, Sentry needs to first locate the instruction and then fetch it from guest memory. To achieve this, Sentry uses Xen's hvm_copy_from_guest_virt function (and its counterpart hvm_copy_to_guest_virt), which read from and write to arbitrary guest locations. When a faulting instruction is fetched, the emulator decodes and executes the instruction inside Xen. Depending upon the instruction type, the decoding process may identify source and destination operands. The emulator executes the instruction by reading and writing the memory locations directly from the hypervisor. To ensure transparency to the guest operating system, it updates all context registers, including the instruction pointer, to point to the next instruction.

5.4 Execution History Extraction

The execution history extractor of Sentry performs a walk of the guest operating system's kernel stack by mapping the guest kernel's stack pages into the hypervisor's memory and then traversing the stack frames present on those pages. Performing a walk of the kernel stack is challenging due to the unstructured layout of call stacks on x86 processors and the presence of interrupt stack frames on a kernel stack. To successfully extract stack frames, we compiled our guest kernel with a compile-time option that produces kernel code maintaining a stack frame base pointer (the ebp register) throughout the kernel's execution. Fortunately, interrupt stack frames do not pose problems for Sentry because Sentry only performs stackwalks following a page fault that occurred due to its protection; it does not perform a stackwalk of the kernel stack at arbitrary points of the kernel's execution. Sentry additionally pauses the guest kernel while executing the stackwalk to ensure that no modifications occur while it reads the guest's memory state.

When a page fault occurs, the extractor finds the location of the current stack frame from the ebp register. It subsequently determines the location of the return address by adding 4 bytes to the value of ebp. To get the actual return address, the extractor reads the value present at the computed location. To get the previous frame, it extracts the address stored at the location pointed to by the ebp register's contents.
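In code, the walk amounts to following the saved-ebp chain while copying values out of guest memory. The sketch below assumes a 32-bit guest compiled with frame pointers; read_guest_u32() is a hypothetical wrapper around Xen's guest-memory copy routines (hvm_copy_from_guest_virt in the text).

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical wrapper: copy 4 bytes from a guest virtual address. */
    extern bool read_guest_u32(uint32_t guest_va, uint32_t *out);

    /* Collect return addresses by walking the ebp chain: the saved caller ebp
     * lives at [ebp] and the return address at [ebp + 4]. */
    size_t collect_return_addresses(uint32_t ebp, uint32_t *ret, size_t max)
    {
        size_t n = 0;
        uint32_t next_ebp, ret_addr;

        while (n < max && ebp != 0) {
            if (!read_guest_u32(ebp + 4, &ret_addr))   /* return address */
                break;
            if (!read_guest_u32(ebp, &next_ebp))       /* saved caller ebp */
                break;
            ret[n++] = ret_addr;
            if (next_ebp <= ebp)                       /* frames must move up the stack */
                break;
            ebp = next_ebp;
        }
        return n;
    }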
The extractor repeats this process until it reaches the end of the stackwalk.

6 Evaluation

In this section, we evaluate both the attack detection capability and the performance of Sentry. We also perform a detailed analysis of the false positives associated with our system.
    Name           Result
    hp             Detected
    all-root       Detected
    kbd-version2   Detected
    kbd-version3   Detected
    override       Detected
    synapsys       Detected
    rkit           Detected
    lvtes          Detected
    adore-ng       Detected

Table 1: Sentry's attack detection results against Linux-based rootkits that modify the kernel's process and module data structures. Each sample exhibits some combination of process hiding, module hiding, and privilege escalation, and Sentry detected every sample.

6.1 Security

Sentry prevents unauthorized modification of security-critical kernel data structures. Note that Sentry is able to protect both static and dynamic kernel data structures. However, our experiments focus on rootkit identification based upon attempted alteration of dynamic kernel data, as this represents a significant new advance in defensive capabilities. We tested Sentry against a collection of DKOM rootkits present in the wild. Table 1 shows our Linux rootkit samples and the malicious behaviors that they exhibit. During our testing, we ran each rootkit sample in the user VM, which was running our Linux kernel with partitioned task_struct and module data structures. Sentry's protections started at kernel boot so that all processes, beginning with the init process, and all modules could be protected. Below, we provide a detailed description of Sentry's detection of the lvtes keylogger and the all-root rootkit.

The lvtes keylogger [4] hides its malicious module by removing it from the doubly-linked module list. It uses the kernel macro list_del, which directly deletes a module from the list by modifying a member called list in the module structure. This member stores the next and previous pointers of the list. To test Sentry against lvtes, we inserted the keylogger module into the user VM's kernel. When the rootkit tried to manipulate list.next and list.prev to remove the module from the doubly-linked list, it caused a page fault because list is considered a sensitive member and is protected by Sentry. The enforcement mechanism of Sentry verifies whether the access should be allowed or denied by checking which code attempted to modify the protected members. Sentry looks at the instruction pointer, which in this case belongs to an untrusted code region in a module. Consequently, Sentry denies the rootkit's access.

The all-root rootkit [26] directly modifies the uid, gid, euid, and egid members of a task_struct structure. These members determine the privilege level of a process, which in turn restricts the types of execution that the process can perform. When the rootkit loads, it hooks into the system-call table and replaces the function pointer associated with the getuid system call with its malicious function mal_getuid. When a malicious process cooperating with the rootkit executes getuid, the rootkit's mal_getuid function executes instead and sets the uid, gid, euid, and egid of the process to 0. This escalates the privileges of the invoking process to superuser. We tested this attack with Sentry, and Sentry detected the modification because these fields were protected. Modifications to these fields caused a page fault, and Sentry found that the modifying code was in a module and denied the access.

We tested Sentry with other rootkits as well, and our results, shown in Table 1, indicate that Sentry provided a 100% detection rate for DKOM rootkits.
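For reference, the hooking behavior described above follows the common pattern of 2.6-era DKOM rootkits sketched below. This is an illustration of the attack that Sentry mediates, not code from the all-root sample; the resolution of sys_call_table is deliberately elided, and the credential writes are exactly the protected-page writes that fault into Sentry and are denied.

    #include <linux/linkage.h>
    #include <linux/sched.h>
    #include <linux/unistd.h>

    static void **sys_call_table_ptr;       /* = ... resolved by the rootkit (elided) */
    static long (*orig_getuid)(void);

    /* Replacement getuid: zero the caller's credentials, then behave normally.
     * Under Sentry, current->uid and related fields live on a protected page,
     * so each of these writes faults into the hypervisor and is rejected
     * because the writing code lies in an untrusted module. */
    asmlinkage long mal_getuid(void)
    {
        current->uid  = 0;
        current->euid = 0;
        current->gid  = 0;
        current->egid = 0;
        return orig_getuid();
    }

    static void hook_getuid(void)
    {
        orig_getuid = (long (*)(void))sys_call_table_ptr[__NR_getuid];
        sys_call_table_ptr[__NR_getuid] = (void *)mal_getuid;
    }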
    Operations                 Normal VM (µs)   Protected (µs)   Overhead (µs)   Sentry (µs)   Overhead (µs)
    Process fork+exit
    Process fork+execve
    Process fork+/bin/sh -c
    Context switch

Table 2: Process creation and context-switch time measured with lmbench; smaller measurements are better. The CPU time-slice on the system was 100 ms.

6.2 Performance

Sentry is designed to provide both security and performance. It provides security by protecting dynamic kernel data, and it partitions kernel data structures to achieve usable performance. We carried out several experiments to measure the performance overhead incurred by the protection offered by Sentry as well as to show the effect of partitioning. Our premise is that if the same protection is offered to both a partitioned and an unpartitioned kernel, the partitioned kernel outperforms the unpartitioned one. To determine the efficacy of Sentry, we tested its performance with both micro and macro benchmarks. Our results were measured on an Intel Core 2 Quad 2.66 GHz system with Fedora 8 in the security VM and our partitioned Linux kernel in the user VM. We assigned 1 GB of memory to the user VM and 3 GB total to the security VM and Xen. Unless otherwise specified, we performed the experiments reported in tables five times and present the median value. We also show the results of some experiments using boxplots due to measurement variations common to virtualized environments. In all the experiments, normal refers to measurements that have no kernel memory protection, protected includes memory protection without partitioning, and Sentry includes both memory protection and partitioning.

In our micro benchmark experiments, we first measured the effect of protection and partitioning of the process data structure. We measured the effect with two tests that exercise heavy legitimate use of the protected structures, process creation and context-switch time, using the lmbench Linux benchmark tool. lmbench performs three different experiments to measure the cost of process creation. Our results, shown in Table 2, indicate that Sentry incurs low overhead on a partitioned kernel compared to the overhead on an unpartitioned kernel.

In another micro benchmark experiment, we evaluated the effect of protection and partitioning of module structures on Linux modules by measuring the time taken by module load (insmod) and unload (rmmod) operations. We wrote a sample module (that traverses the list of loaded modules) and inserted it using the insmod program. The same module is then unloaded using the rmmod program. Figures 8 and 9 present the results, and it can be seen that the loading and unloading times are higher for the unpartitioned kernel when compared with Sentry's times.

In the next set of experiments, we measured the effect of Sentry on filesystem cache performance. To test this, we used a filesystem benchmark called IOzone, which measures the throughput of read, write, re-read, and re-write operations. Figure 10 shows quartile measurements of ten repetitions of each test. Note that these results show use of the kernel's filesystem cache rather than of disk operations, due to the instability of disk performance measurements in a virtualized environment. Sentry's protection incurs less overhead on file cache operations when performed using the partitioned kernel compared to its effect on an unpartitioned kernel, and in many cases its performance nears that of the unprotected system.

To measure Sentry's performance on real-world software, we tested it against full applications.
Figure 8: Module loading operation performed via insmod; smaller measurements are better.

Figure 9: Module unloading operation performed via rmmod; smaller measurements are better.

Figure 10: Performance impact of kernel memory protection on use of the kernel's file cache. All measurements show throughput in MB/s; higher measurements are better. Boxes show medians and first and third quartiles. Outliers appear as dashes. Groupings show performance of (a) file-cache cold reads, (b) cache-warm reads, (c) cache-cold writes, and (d) cache-warm writes.

In our first two experiments, we compiled a Linux kernel and the Apache source code in the user VM. Source code compilation invokes many gcc processes and disk reads and writes, providing a mixed load with which to measure Sentry's performance. Table 3 shows the results of the mixed loads, and it is evident that Sentry's partitioning has reduced the overhead caused by its protection by 50% when compared with the overhead on an unpartitioned kernel.

The next two experiments measure Sentry's performance on CPU-bound applications. We performed these experiments with the Linux compression and decompression utilities bzip2 and bunzip2. We performed the compression experiment by providing a 225 MB tar file to bzip2 for compression. Later, we performed the reverse operation by passing the 40 MB compressed file to bunzip2 to decompress. The results show that Sentry's overall effect on CPU-bound applications is negligible; a system with this protection is certainly usable.

Our last experiment measures Sentry's effect on network processes by testing it against network loads. We copied a file of size 174 MB over the physical network using a thttpd daemon accessed by the wget command. The results, shown in Table 3, demonstrate that Sentry's overhead is negligible for network workloads.

The above results provide strong evidence that Sentry can provide a balance between security and performance. Its performance on several applications is encouraging and suggests it may be suitable for real-world environments.
Table 3: Performance impact of Sentry on real-world applications; smaller measurements are better. The table reports, in seconds with percentage overheads, the normal VM, the protected (unpartitioned) kernel, and Sentry for five workloads: kernel compilation (make), Apache compilation (make), compression (bzip2), decompression (bunzip2), and network file transfer (wget).

6.3 Potential Performance Improvements

Although the previous section shows that Sentry's overhead on real-world applications is low, its performance can be improved further by making common operations more efficient. We have identified the following opportunities:

New memory allocator. When using structure-division partitioning, Sentry allocates each security-critical structure on a protected page using the kernel function __get_free_page, which returns a new page from the kernel memory pool. Since the security-critical portion of each data structure is placed on its own page, this allocation wastes system memory because the remainder of each page goes unused. An improved memory allocator could pack additional secure structures onto an already-protected page until the page fills, allocating a new page only then (a sketch of such an allocator appears at the end of this subsection). This approach provides two advantages. First, it reduces the TLB flushing that Xen performs when Sentry adds a new page to the list of protected pages. Second, it reduces the number of VMCALLs from the guest to Xen requesting protection of a new page, because the page on which a new structure is placed may already be protected. The downside of reusing a page is an increase in page faults during the initialization of each new structure.

Mapping memory pages. Sentry uses Xen's memory mapping and unmapping functions, such as hvm_copy_to_guest_virt and hvm_copy_from_guest_virt, during stackwalks and instruction emulation. These functions access the guest VM's memory from the hypervisor; however, Xen's implementation is inefficient. On each invocation, Xen maps the requested page from the guest kernel's memory into hypervisor memory, performs the operation, and then unmaps the page. These functions could be improved by keeping a memory page mapped inside hypervisor memory to exploit locality of reference. For example, during a stackwalk, Sentry maps and unmaps a stack page every time it reads a return address or frame pointer; if the page were mapped once for the duration of a single walk, Xen would avoid the repeated mapping and unmapping operations.
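The paper does not give code for this improved allocator; the following is a minimal sketch of the idea, written as guest-kernel C. The sentry_protect_page() call is a hypothetical wrapper around the VMCALL that asks Xen to protect a page; it is not part of the stock kernel or of Sentry as described.

```c
/* Sketch of a bump allocator that packs security-critical objects onto
 * already-protected pages instead of dedicating one page per object. */
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/spinlock.h>

/* Hypothetical hypercall wrapper: ask the hypervisor to write-protect
 * the page at this kernel virtual address. */
extern void sentry_protect_page(unsigned long addr);

static DEFINE_SPINLOCK(secure_pool_lock);
static unsigned long secure_page;          /* current protected page     */
static size_t secure_off = PAGE_SIZE;      /* forces a page on first use */

void *secure_alloc(size_t size)
{
	unsigned long flags;
	void *obj = NULL;

	size = ALIGN(size, sizeof(void *));
	if (size == 0 || size > PAGE_SIZE)
		return NULL;

	spin_lock_irqsave(&secure_pool_lock, flags);
	if (secure_off + size > PAGE_SIZE) {
		unsigned long page = __get_free_page(GFP_ATOMIC);
		if (!page)
			goto out;
		/* One VMCALL per page rather than one per object. */
		sentry_protect_page(page);
		secure_page = page;
		secure_off = 0;
	}
	obj = (void *)(secure_page + secure_off);
	secure_off += size;
out:
	spin_unlock_irqrestore(&secure_pool_lock, flags);
	return obj;
}
```

Because several objects now share a page, each initialization write still faults into the hypervisor, which is the increased page-fault cost noted above; a fuller design would also need a matching free path and per-CPU pools to limit lock contention.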
6.4 False Positive Analysis

A false positive occurs in our approach when a benign module or driver modifies a security-critical member in a way that violates the integrity policies described in Section 4.1. Our analysis revealed the following. First, there were no instances in which security-critical kernel data protected by Sentry was directly modified by a benign driver; any such direct modification can therefore be treated as an attack. Second, whenever security-critical data protected by Sentry was altered on behalf of a benign driver, the alteration was made through exported functions designed by kernel developers for that purpose, and those functions left the kernel in a consistent state.

We illustrate this point with an example. A task_struct contains a member called run_list, which is similar to tasks (the pointer into the accounting list) but holds the next and previous pointers for the scheduling list; Sentry protects the run_list member. These pointers are modified by functions such as enqueue_task and dequeue_task, which are in turn called from the schedule function, which is exported by the kernel. The schedule function is invoked on every context switch, including from modules, and modifies run_list. Since our policy allows changes made to kernel data via exported functions, which act as trust upgraders, whenever members such as run_list were modified we verified the call chain and allowed the modification. With this design in place, our system produced no false positives and detected all attacks.

7 Discussion

Sentry is designed to protect security-critical kernel data. It partitions a data structure into secure and insecure structures and protects the secure structure using the hypervisor. In this section, we discuss some of the ways in which an attacker might try to bypass our system to manipulate secure data, possible ways to overcome Sentry's limitations, and other applications of the system.

Stack Frame Farming Attack. An attacker cannot write to a protected page without invoking Sentry. However, a motivated attacker can try to defeat the stackwalk itself by launching a stack frame farming attack: the attacker creates fake frames on the kernel stack and then jumps into kernel code that modifies protected data. During the stackwalk, the fake frames cause Sentry to conclude that the request originated from trusted code regions, and since trusted code is allowed to modify security-critical kernel data, the access would be granted (a sketch of the kind of stackwalk check being targeted appears at the end of this section). Though this attack is possible, it is challenging to launch on a kernel stack. A kernel stack is more complicated than a process stack because it holds both function-call frames and interrupt frames, and corrupting an interrupt frame may crash the system.

Windows Operating System Support. Sentry requires the source code of an operating system in order to partition a structure into secure and insecure parts. This kind of protection is difficult to design for a closed-source operating system such as Windows. However, the approach presented in this paper for protecting dynamic kernel data could be adopted by Windows developers by creating a partitioned kernel during the OS development cycle.

Automatic Partitioning. Both of our proposed partitioning strategies require modifications to an existing kernel, and we currently perform this task manually. Though manual partitioning is laborious, it demonstrates the feasibility of our ideas. In the future, automated techniques and tools, such as the source-to-source transformations performed by CIL [23], would be helpful.

Automatic Data Access Profile Extraction. Our framework monitors data accesses and checks whether protected data is modified legitimately. The same mechanism can be used to extract data-access profiles for arbitrary kernel data structures and their members; the extracted information can then be used to create fine-grained policies specific to the monitored data.
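To make the target of the stack frame farming attack concrete, here is a minimal, self-contained sketch of the kind of frame-pointer walk Sentry performs from the hypervisor; it is our illustration under stated assumptions (an x86-64, frame-pointer-based stack layout), not Sentry's actual code. The read_guest_long() helper stands in for Xen's guest-memory copy functions such as hvm_copy_from_guest_virt, and the trusted-region table is a placeholder.

```c
/* Sketch of a hypervisor-side call-chain check: a write to a protected
 * page is allowed only if the faulting instruction and every return
 * address on the guest kernel stack lie in trusted code regions. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct code_range { uint64_t start, end; };

/* Placeholder ranges: core kernel text plus exported trust-upgrader
 * functions.  A real system would fill this from guest symbol data. */
static const struct code_range trusted_text[] = {
	{ 0xffffffff81000000ULL, 0xffffffff82000000ULL },
};

/* Stand-in for the hypervisor's guest-memory read (e.g., built on
 * hvm_copy_from_guest_virt): read one long from a guest kernel virtual
 * address; returns false on failure. */
bool read_guest_long(uint64_t gva, uint64_t *out);

static bool in_trusted_text(uint64_t rip)
{
	for (size_t i = 0; i < sizeof(trusted_text) / sizeof(trusted_text[0]); i++)
		if (rip >= trusted_text[i].start && rip < trusted_text[i].end)
			return true;
	return false;
}

/* Walk saved frame pointers: each frame stores the caller's frame
 * pointer at [rbp] and the return address at [rbp + 8]. */
bool write_is_trusted(uint64_t faulting_rip, uint64_t rbp, int max_frames)
{
	if (!in_trusted_text(faulting_rip))
		return false;

	for (int i = 0; i < max_frames && rbp; i++) {
		uint64_t ret, next_rbp;

		if (!read_guest_long(rbp + 8, &ret) ||
		    !read_guest_long(rbp, &next_rbp))
			return false;        /* unreadable stack: deny  */
		if (!in_trusted_text(ret))
			return false;        /* untrusted caller: deny  */
		rbp = next_rbp;
	}
	return true;
}
```

A frame-farming attacker forges a chain of saved frame pointers and return addresses that all fall inside the trusted ranges, which is why such a check can in principle be fooled; as noted above, doing so on a kernel stack without corrupting interrupt frames is difficult.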
8 Conclusions

We developed Sentry to provide partitioned kernel memory in a manner similar to the memory isolation the kernel provides for its applications. We protect security-critical data by protecting the memory pages containing that data. To balance security and performance, we altered kernel memory layouts to aggregate data needing the same policy enforcement onto the same memory pages. Sentry's security evaluation shows that the system detects attacks against dynamic kernel data, and its performance evaluation shows low overheads on microbenchmarks and real-world applications.
References

[1] A. Baliga, V. Ganapathy, and L. Iftode. Automatic inference and enforcement of kernel data structure invariants. In 24th Annual Computer Security Applications Conference (ACSAC), Anaheim, CA, Dec.
[2] K. J. Biba. Integrity considerations for secure computer systems. Technical Report MTR-3153, Mitre, Apr.
[3] D. Brumley and D. Song. Privtrans: Automatically partitioning programs for privilege separation. In 13th USENIX Security Symposium, San Diego, CA, Aug.
[4] A. Chakrabarti. An introduction to Linux kernel backdoors. lvtes.txt. Last accessed Jan. 15.
[5] X. Chen, T. Garfinkel, E. C. Lewis, P. Subrahmanyam, C. A. Waldspurger, D. Boneh, J. Dwoskin, and D. R. Ports. Overshadow: A virtualization-based approach to retrofitting protection in commodity operating systems. In 13th International Conference on Architectural Support for Programming Languages and Operating Systems, Seattle, WA, Mar.
[6] S. Chong, J. Liu, A. C. Myers, X. Qi, K. Vikram, L. Zheng, and X. Zheng. Secure web applications via automatic partitioning. In 21st ACM Symposium on Operating Systems Principles (SOSP), Stevenson, WA, Oct.
[7] F. Corbato and V. Vyssotsky. Introduction and overview of the Multics system. In Proceedings of the Fall Joint Computer Conference, Las Vegas, NV, Nov.
[8] A. Dinaburg, P. Royal, M. Sharif, and W. Lee. Ether: Malware analysis via hardware virtualization extensions. In 15th ACM Conference on Computer and Communications Security (CCS), Alexandria, VA, Oct.
[9] G. W. Dunlap, S. T. King, S. Cinar, M. A. Basrai, and P. M. Chen. ReVirt: Enabling intrusion analysis through virtual-machine logging and replay. In Operating Systems Design and Implementation (OSDI), Boston, MA, Dec.
[10] Ú. Erlingsson, M. Abadi, M. Vrable, M. Budiu, and G. C. Necula. XFI: Software guards for system address spaces. In Symposium on Operating System Design and Implementation (OSDI), Seattle, WA, Nov.
[11] T. Garfinkel, B. Pfaff, J. Chow, M. Rosenblum, and D. Boneh. Terra: A virtual machine-based platform for trusted computing. In ACM Symposium on Operating Systems Principles (SOSP), Bolton Landing, NY, Oct.
[12] T. Garfinkel and M. Rosenblum. A virtual machine introspection based architecture for intrusion detection. In Network and Distributed System Security Symposium (NDSS), San Diego, CA, Feb.
[13] S. Hand, A. Warfield, K. Fraser, E. Kotsovinos, and D. Magenheimer. Are virtual machine monitors microkernels done right? In Proceedings of the 10th Workshop on Hot Topics in Operating Systems, Santa Fe, NM, June.
[14] Intel. System Programming Guide: Part 2. Intel 64 and IA-32 Architectures Software Developer's Manual.
[15] X. Jiang and X. Wang. Out-of-the-box monitoring of VM-based high-interaction honeypots. In 10th International Symposium on Recent Advances in Intrusion Detection (RAID), Surfers Paradise, Australia, Sept.
[16] X. Jiang, X. Wang, and D. Xu. Stealthy malware detection through VMM-based out-of-the-box semantic view. In 14th ACM Conference on Computer and Communications Security (CCS), Alexandria, VA, Nov.
[17] S. T. Jones, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau. VMM-based hidden process detection and identification using Lycosid. In ACM Conference on Virtual Execution Environments (VEE), Seattle, WA, Mar.
[18] K. Kasslin. Hide 'n seek revisited: full stealth is back. KimmoKasslin VB2005 proceedings.pdf. Last accessed Jan. 15.
[19] S. T. King and P. M. Chen. Backtracking intrusions. In ACM Symposium on Operating System Principles (SOSP), Bolton Landing, NY, Oct.
[20] L. Litty, H. A. Lagar-Cavilla, and D. Lie. Hypervisor support for identifying covertly executing binaries. In 17th USENIX Security Symposium, San Jose, CA, Aug.
[21] Microsoft. PatchGuard. aspx. Last accessed Feb. 4.
[22] A. Moser, C. Kruegel, and E. Kirda. Exploring multiple execution paths for malware analysis. In IEEE Symposium on Security and Privacy, Oakland, CA, May.
[23] G. C. Necula, S. McPeak, S. Rahul, and W. Weimer. CIL: Intermediate language and tools for analysis and transformation of C programs. In Proceedings of the Conference on Compiler Construction (CC), Grenoble, France, Apr.
[24] N. L. Petroni, Jr., T. Fraser, A. Walters, and W. A. Arbaugh. An architecture for specification-based detection of semantic integrity violations in kernel dynamic data. In 15th USENIX Security Symposium, Vancouver, BC, Canada, Aug.
[25] N. L. Petroni, Jr. and M. Hicks. Automated detection of persistent kernel control-flow attacks. In 14th ACM Conference on Computer and Communications Security (CCS), Alexandria, VA, Nov.
[26] Packet Storm. all-root.c. Last accessed Jan. 15.
[27] B. D. Payne, M. Carbone, and W. Lee. Secure and flexible monitoring of virtual machines. In 23rd Annual Computer Security Applications Conference (ACSAC), Miami, FL, Dec.
[28] B. D. Payne, M. Carbone, M. Sharif, and W. Lee. Lares: An architecture for secure active monitoring using virtualization. In IEEE Symposium on Security and Privacy, Oakland, CA, May.
[29] N. Provos. A virtual honeypot framework. In 13th USENIX Security Symposium, San Diego, CA, Aug.
[30] N. A. Quynh and Y. Takefuji. Towards a tamper-resistant kernel rootkit detector. In ACM Symposium on Applied Computing (SAC), Seoul, Korea, Mar.
[31] R. Riley, X. Jiang, and D. Xu. Guest-transparent prevention of kernel rootkits with VMM-based memory shadowing. In 11th International Symposium on Recent Advances in Intrusion Detection (RAID), Boston, MA, Sept.
[32] N. Rosenblum, G. Cooksey, and B. Miller. Virtual machine-provided context sensitive page mappings. In Virtual Execution Environments (VEE), Seattle, WA, Mar.
[33] R. Sailer, T. Jaeger, E. Valdez, R. Caceres, R. Perez, S. Berger, J. L. Griffin, and L. van Doorn. Building a MAC-based security architecture for the Xen open-source hypervisor. In Annual Computer Security Applications Conference (ACSAC), Tucson, AZ, Dec.
[34] A. Seshadri, M. Luk, N. Qu, and A. Perrig. SecVisor: A tiny hypervisor to provide lifetime kernel code integrity for commodity OSes. In ACM Symposium on Operating Systems Principles (SOSP), Stevenson, WA, Oct.
[35] A. Srivastava and J. Giffin. Tamper-resistant, application-aware blocking of malicious network connections. In 11th International Symposium on Recent Advances in Intrusion Detection (RAID), Boston, MA, Sept.
[36] Symantec. Windows rootkit overview. rootkit.overview.pdf. Last accessed Jan. 15.
[37] R. Ta-Min, L. Litty, and D. Lie. Splitting interfaces: Making trust between applications and operating systems configurable. In Symposium on Operating System Design and Implementation (OSDI), Seattle, WA, Oct.
[38] ubra. Process hiding and the Linux scheduler. Phrack, 11(63), Jan.
[39] E. Witchel, J. Cates, and K. Asanovic. Mondrian memory protection. In Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), San Jose, CA, Oct.
[40] M. Xu, X. Jiang, R. Sandhu, and X. Zhang. Towards a VMM-based usage control framework for OS kernel integrity protection. In 12th ACM Symposium on Access Control Models and Technologies (SACMAT), Sophia Antipolis, France, June.
[41] J. Yang and K. Shin. Using hypervisor to provide application data secrecy on a per-page basis. In Virtual Execution Environments (VEE), Seattle, WA, Mar.
[42] H. Yin, D. Song, M. Egele, C. Kruegel, and E. Kirda. Panorama: Capturing system-wide information flow for malware detection and analysis. In ACM Conference on Computer and Communications Security (CCS), Arlington, VA, Oct.
[43] X. Zhao, K. Borders, and A. Prakash. Towards protecting sensitive files in a compromised system. In 3rd IEEE Security in Storage Workshop.
