Cloud security CS642: Computer Security Professor Ristenpart http://www.cs.wisc.edu/~rist/ rist at cs dot wisc dot edu University of Wisconsin CS 642




Announcements
- No office hour today
- Extra TA office hour tomorrow (10-11am or 3-4pm?)
- No class Wednesday
- Homework 3 due Wednesday
- Homework 4 later this week
- Project presentations Dec 10 and 12
- Take-home final handed out Dec 12, due one week later

Cloud computing
NIST: Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Cloud providers offer:
- Infrastructure-as-a-Service
- Platform-as-a-Service
- Software-as-a-Service

A simplified model of public cloud computing
Users (say, User A and User B) run virtual machines (VMs) on infrastructure owned/operated by the cloud provider. Multitenancy: users share physical resources. A Virtual Machine Manager (VMM) manages the physical server's resources for the VMs; to each VM, the environment should look like a dedicated server.

Trust models in public cloud computing
Users must trust the third-party provider to:
- not spy on running VMs / data
- secure the infrastructure from external attackers
- secure the infrastructure from internal attackers

Trust models in public cloud computing, continued: now suppose User B is a bad guy. Beyond trusting the provider, what are the threats due to sharing of physical infrastructure? The co-located bad guy could be your business competitor, script kiddies, or criminals.

A new threat model: the attacker identifies one or more victim VMs in the cloud, then
1) achieves advantageous placement via launching of VM instances
2) launches attacks using physical proximity: exploit a VMM vulnerability, DoS, side-channel attack

Given 1 or more targets in the cloud that we want to attack from the same physical host, the naive approach is to launch lots of instances (over time), with each attempting an attack. Can attackers do better?

Outline of a more damaging approach:
1) Cloud cartography: map the internal infrastructure of the cloud; the map is used to locate targets in the cloud
2) Checking for co-residence: check that a VM is on the same server as the target (network-based co-residence checks; efficacy confirmed by covert channels)
3) Achieving co-residence: brute forcing placement; instance flooding after the target launches. Placement vulnerability: attackers can knowingly achieve co-residence with a target
4) Location-based attacks: side-channels, DoS, escape-from-VM

Case study with Amazon's EC2:
1) given no insider information
2) restricted by (the spirit of) Amazon's acceptable use policy (AUP): using only Amazon's customer APIs and very restricted network probing
We were able to: perform cloud cartography, pick target(s), choose launch parameters for malicious VMs, have each VM check for co-residence, frequently achieve advantageous placement, and mount cross-VM side channel attacks to spy on a victim's computational load (secret data).

Some info about the EC2 service (at time of study):
- Linux-based VMs available; uses a Xen-based VM manager
- Launch parameters per user account: 3 availability zones (Zone 1, Zone 2, Zone 3) and 5 instance types (various combinations of virtualized resources)

Type                 GiB of RAM   EC2 Compute Units (ECU)
m1.small (default)   1.7          1
m1.large             7.5          4
m1.xlarge            15           8
c1.medium            1.7          5
c1.xlarge            7            20

1 ECU = 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. Limit of 20 instances at a time per account; essentially unlimited accounts with a credit card.

(Simplified) EC2 instance networking: an external domain name resolves via external DNS to an external IP, while internal DNS maps the same external domain name to an internal IP. External traffic passes through edge routers; the Xen VMM's Dom0 sits between the network and the VM's internal IP. Our experiments indicate that internal IPs are statically assigned to physical servers. Co-residence checking via Dom0: the Dom0 IP address shows up in traceroutes, and it is the only hop on a traceroute to a co-resident target.
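As a sketch, the Dom0 traceroute check amounts to a small predicate. The hop-list representation below is illustrative (a real check would parse traceroute output), not Amazon's actual format:

```python
def dom0_coresidence_check(hops_to_target, hops_to_self):
    """Network-based co-residence heuristic from the slide: a probe VM
    and a target likely share a physical server when the traceroute to
    the target's internal IP shows exactly one intermediate hop (the
    Xen Dom0 address) and that hop matches the prober's own Dom0 IP.
    Each argument is the list of router IPs seen on a traceroute."""
    if len(hops_to_target) != 1 or len(hops_to_self) != 1:
        return False  # more than one hop: not behind the same Dom0
    return hops_to_target[0] == hops_to_self[0]
```

A two-hop traceroute immediately rules out co-residence; a one-hop route with a matching Dom0 IP is the positive signal the experiments rely on.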

Cloud cartography: pick target(s), then choose launch parameters for malicious VMs. Launch parameters per user account: 3 availability zones (Zone 1, Zone 2, Zone 3) and 5 instance types (m1.small, c1.medium, m1.large, m1.xlarge, c1.xlarge).


Mapping 6,577 public HTTP servers running on EC2 (Fall 2008): external DNS gives the external IP, internal DNS gives the internal IP. Plotting internal IP address against (internal IP address mod 256) clusters servers by availability zone and instance type, so we associate to each /24 an estimate of its availability zone and instance type.
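The cartography step then amounts to a lookup table keyed by /24. A minimal sketch (the sample IPs and labels in the usage below are hypothetical):

```python
import ipaddress

def build_map(labelled_instances):
    """Cloud-cartography sketch: from (internal_ip, zone, type) tuples
    for instances whose placement we know (e.g., ones we launched
    ourselves), label each /24 with the zone/type observed there."""
    labels = {}
    for ip, zone, itype in labelled_instances:
        net = ipaddress.ip_network(ip + "/24", strict=False)
        labels[net] = (zone, itype)
    return labels

def estimate_placement(labels, target_internal_ip):
    """Estimate a target's zone/type from the /24 its internal IP is in."""
    addr = ipaddress.ip_address(target_internal_ip)
    for net, tag in labels.items():
        if addr in net:
            return tag
    return None  # the target's /24 was never sampled
```

With this map in hand, an attacker who learns a target's internal IP can choose launch parameters (zone and instance type) matching the target's estimated placement.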

Achieving co-residence: brute-forcing co-residence
Experiment: the attacker launches many VMs over a relatively long period of time, in the target's zone and of the target's type.
- 1,686 public HTTP servers as stand-in targets, running m1.small in Zone 3 (via our map)
- 1,785 attacker instances launched over 18 days; each checked co-residence against all targets using the Dom0 IP
Results: 78 unique Dom0 IPs; 141 / 1,686 (8.4%) of targets had an attacker co-resident, a lower bound on the true success rate. Sequential placement locality lowers success.

Achieving co-residence: instance flooding near the target's launch abuses parallel placement locality. Launch many instances in parallel near the time of the target's launch.

Instance flooding experiment, repeated for 10 trials:
1) Launch 1 target VM (Account A)
2) 5 minutes later, launch 20 attack VMs (alternating between Account B and Account C)
3) Determine if any are co-resident with the target using the Dom0 IP
Result: 4 / 10 trials succeeded.

How long is parallel placement locality good for? Experiment: 40 target VMs (across two accounts); 20 attack VMs launched hourly.

What about commercial accounts? We targeted free demos of Internet appliances powered by EC2. For one appliance: 2 attempts; on the 1st, co-resident with 40 attack VMs launched; on the 2nd, 2 VMs co-resident out of 40 launched. For another: several attempts; co-resident on the 1st with 40 VMs; subsequent attempts failed.

Checking for co-residence: how do we know the Dom0 IP is a valid co-residence check? Use a simple covert channel over the shared hard disk as ground truth: the sender transmits a 1 by frantically reading random disk locations and transmits a 0 by doing nothing, while the receiver times the reading of a fixed location. Covert channels require control of both VMs: we use them only to verify the network-based co-residence check.
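The receiver's decoding step can be sketched as a threshold on read latency. The threshold value below is illustrative, not a measured EC2 number:

```python
def decode_bits(fixed_location_read_times_us, threshold_us=5000):
    """Receiver side of the hard-disk covert channel: one timed read of
    a fixed disk location per bit slot. A slow read suggests the sender
    was frantically reading random locations (bit 1); a fast read
    suggests the sender stayed idle (bit 0)."""
    return [1 if t > threshold_us else 0 for t in fixed_location_read_times_us]
```

The channel is noisy and slow, but for validating a co-residence check a few correctly received bits suffice.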

Checking for co-residence: experiment, repeated 3 times:
1) launch 20 m1.small instances under Account A
2) launch 20 m1.small instances under Account B
3) for all pairs with matching Dom0, send a 5-bit message across the hard-disk covert channel
We ended up with 31 pairs of co-resident instances as indicated by Dom0 IPs. Result: a correctly-received message was sent for every pair of instances, so the Dom0 check works. During the experiment we also performed pings to 2 control instances in each zone and to the co-resident VM; RTT times also indicate co-residence:

Target                Median RTT (ms)
Zone 1 Control 1      1.164
Zone 1 Control 2      1.027
Zone 2 Control 1      1.113
Zone 2 Control 2      1.187
Zone 3 Control 1      0.550
Zone 3 Control 2      0.436
Co-resident VM        0.242
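The RTT evidence in that table can likewise be turned into a rough classifier. The 0.6 cutoff factor below is an illustrative choice that separates these particular numbers; it was not part of the original experiment:

```python
def rtt_suggests_coresidence(probe_median_rtt_ms, control_median_rtts_ms):
    """Flag likely co-residence when the probe's median ping RTT is well
    below that of every control instance: in the experiment, the
    co-resident VM answered in 0.242 ms versus 0.436-1.187 ms for the
    in-zone controls."""
    return probe_median_rtt_ms < 0.6 * min(control_median_rtts_ms)
```
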

So far we were able to: perform cloud cartography, pick target(s), choose launch parameters for malicious VMs, have each VM check for co-residence, and frequently achieve advantageous placement. But this shouldn't matter if the VMM provides good isolation between the guests running above the hypervisor and hardware!

Violating isolation:
- The hard-drive covert channel used to validate the Dom0 co-residence check already violated isolation
- Degradation-of-service attacks: guests might maliciously contend for resources (e.g., a Xen scheduler vulnerability)
- Escape-from-VM vulnerabilities
- Side-channel attacks

Cross-VM side channels using CPU cache contention. The attacker VM measures the victim VM's load via the shared CPU data cache:
1) Read in a large array (fill the CPU cache with attacker data)
2) Busy loop (allow the victim to run)
3) Measure the time to read the large array (the load measurement)
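The three steps map onto a prime/idle/probe loop. A Python sketch follows; note that Python's timing granularity is far too coarse for a real attack (the original used native code and cycle counters), and the 6MB cache size is an assumption:

```python
import time

CACHE_BYTES = 6 * 1024 * 1024   # assumed last-level cache size
LINE = 64                        # touch one byte per 64-byte cache line

def cache_load_measurement(idle_s=0.0):
    """One round of the load measurement:
    1) prime: walk a cache-sized buffer so the cache fills with our data
    2) idle: yield so a (hypothetical) co-resident victim can run and
       evict our lines
    3) probe: time the same walk; a slower walk means the victim was
       busy touching memory."""
    buf = bytearray(CACHE_BYTES)
    for i in range(0, CACHE_BYTES, LINE):   # prime
        buf[i] = 1
    time.sleep(idle_s)                       # stand-in for the busy loop
    t0 = time.perf_counter()
    total = 0
    for i in range(0, CACHE_BYTES, LINE):   # probe
        total += buf[i]
    return time.perf_counter() - t0
```

Repeating this measurement yields a time series of probe latencies that tracks the victim's computational load.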

Cache-based cross-VM load measurement on EC2: one instance runs an Apache server receiving repeated HTTP GET requests, while another instance performs cache load measurements. 3 pairs of instances, 2 pairs co-resident and 1 not: 100 cache load measurements during HTTP GETs (1024-byte page) and with no HTTP GETs. (Figure panels: instances co-resident; instances co-resident; instances NOT co-resident.)

Cache-based load measurement of traffic rates on EC2: one instance runs an Apache server with varying rates of web traffic; the other performs cache load measurements. 3 trials with 1 pair of co-resident instances: 1000 cache load measurements during 0, 50, 100, or 200 HTTP GETs (3 MB page) per minute, for ~1.5 minutes each.

Performance loss from contention. Chart: VM performance degradation (%), on a 0-600% scale, for contention on CPU, Net, Disk, and Cache. Local Xen testbed: Intel Xeon E5430, 2.66 GHz, 2 packages each with 2 cores, 6MB LLC per package.

Resource-Freeing Attacks (RFAs). Goal: reduce the performance loss from contention. Intuition: performance suffers from contention for a target resource; introducing a new workload on a victim can shift their usage away from that target.

Ingredients for a successful RFA:
- Shift the victim's resource usage away from the target resource: create a bottleneck on an essential resource (e.g., CPU, via dynamic CGI pages) used by the victim
- Shift the usage via a public interface, without affecting the beneficiary
- Free up the target resource: the victim's proportion of CPU usage rises while its proportion of network usage (serving static pages to clients) falls

Example RFA: network bandwidth. The victim runs an Apache webserver hosting static and dynamic content (CGI pages); the beneficiary also runs an Apache webserver hosting static content. Both serve clients over the network and contend for network bandwidth.

Example RFA: network bandwidth, continued. A helper sends CPU-intensive CGI requests to the victim. This creates a CPU bottleneck on the victim (driving up its CPU utilization), which frees up bandwidth, increasing the beneficiary's share of bandwidth from 50% to 85%.
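A toy model makes the mechanism concrete, assuming link bandwidth splits in proportion to demand. The demand numbers in the usage below are made up to reproduce the 50%-to-85% shift reported on the slide, not measured values:

```python
def beneficiary_bandwidth_share(victim_net_demand, beneficiary_net_demand):
    """Toy fair-sharing model: each tenant receives bandwidth in
    proportion to its network demand. The helper's CPU-intensive CGI
    requests bottleneck the victim on CPU, shrinking victim_net_demand
    and thereby freeing bandwidth for the beneficiary."""
    return beneficiary_net_demand / (victim_net_demand + beneficiary_net_demand)
```

With equal demand the beneficiary gets 50%; once the victim is CPU-bound and its network demand collapses (say, to 0.18 of the beneficiary's), the beneficiary's share rises to roughly 85%.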

Example RFA: cache contention. The victim runs an Apache webserver hosting static and dynamic content (CGI pages); the beneficiary runs a cache-sensitive workload. The two contend for the cache.

Figure 8: Normalized performance (baseline runtime over runtime) for SPEC workloads (mcf, astar, bzip2, hmmer, sphinx, SPECjbb, graph500) on our local testbed for various RFA intensities (No-RFA, 320, 640).

Our experiments can therefore indirectly impact other customers' service only to the same extent as typical use.

Experiments on EC2. To test an RFA, we require control of at least two instances running on the same physical machine. As AWS does not provide this capability directly, we used known techniques [25] to achieve sets of co-resident m1.small instances on 12 different physical machines in the EC2 us-east-1c region. Specifically, we launched large numbers of instances of the same type and then used RTTs of network probes to check co-residence; co-residence was confirmed using a cache-based covert channel. Nine of these machines shared the same architecture, Intel Xeon E5507 with a 4MB LLC; we discarded the other instances to focus on those for which we had a large corpus, summarized in Figure 9. We arranged for co-resident placement of m1.small instances from accounts under our control, and a pair of co-resident instances was used as stand-ins for the victim and the beneficiary. Each instance ran Ubuntu 11.04 with Linux kernel 2.6.38-11-virtual.

Figure 9: Summary of EC2 machines and number of co-resident m1.small instances running under our accounts.
Machine    # instances
E5507-1    4
E5507-2    2
E5507-3    2
E5507-4    3
E5507-5    2
E5507-6    2
E5507-7    2
E5507-8    3
E5507-9    3

Demonstration on Amazon EC2. MCF is cache bound; Apache's interrupts and data pollute the cache. Chart: average runtime (sec) of MCF, roughly 48-56 s across 9 trials, under No-RFA, RFA, and no-Apache configurations.

What can cloud providers do?
1) Cloud cartography: Amazon doesn't report Dom0 in traceroutes anymore.
2) Checking for co-residence and 3) achieving co-residence: Amazon now provides dedicated instances, but they cost a lot more. Possible counter-measures:
- random internal IP assignment
- isolate each user's view of the internal address space
- hide Dom0 from traceroutes
- allow users to opt out of multitenancy
4) Side-channel information leakage and resource-freeing attacks:
- hardware or software countermeasures to stop leakage [Ber05, OST05, Page02, Page03, Page05, Per05]
- improved performance isolation

Untrusted provider: a lot of work is aimed at an untrustworthy provider.
Attestation of the cloud:
- HomeAlone: use L2 cache side-channels to detect the presence of a foreign VM
- RAFT: Remote Assessment of Fault Tolerance, to infer whether data is stored in a redundant fashion
Keeping data private: searchable or fully-homomorphic encryption.

More on cache-based physical channels. Prime+Trigger+Probe combined with a differential encoding technique gives a high-bandwidth cross-VM covert channel on EC2; see [Xu et al., An Exploration of L2 Cache Covert Channels in Virtualized Environments, CCSW 2011]. Keystroke timing in an experimental testbed similar to EC2 m1.small instances (AMD Opterons, 2 CPUs with 2 cores each, VMs pinned to a core): we show that cache-load measurements enable cross-VM keystroke detection. Keystroke timing of this form might be sufficient for the password-recovery attacks of [Song, Wagner, Tian 01].