Cryptographic Security Mechanisms for Cloud Computing
Christian Cachin, IBM Research - Zurich
June 2014
© 2009 IBM Corporation
Cloud computing
Figure: compute, network, storage.
Cloud computing
- Cloud computing = IT outsourcing
- Resources are virtual (SDx = software-defined x)
- Infrastructure shared among many clients (= tenants)
- Automated and self-managed
- Standardized interfaces and solutions
- Providers amortize cost over many clients
- Clients rent services instead of owning equipment
Hardware becomes a commodity
Figure: servers...
Physical location becomes irrelevant
Figure: data center in Luleå (SE), near the Arctic Circle.
Benefits and challenges
Cloud services are convenient:
- No investment cost
- Pay only for consumption
- Scalable
- No skills needed
- Access from everywhere
- Only standardized services
Clouds pose threats:
- Unknown exposure
- Inherent risk of outsourcing
- No established contracts
- Loss of control
- Fast and reliable network needed
- Customization not possible
Security concerns in cloud computing
Distinguish between traditional security concerns and cloud-specific issues:
- Authentication (not only users, also services)
- Authorization (users and services)
- Data confidentiality
- Data integrity
- Data removal
- Monitoring, audits, forensics
- Isolation between tenants
- Protection of infrastructure (TCB = trusted computing base)
Cloud security from two viewpoints
Figure: clients (Alice, Bob, Charlie) and the provider view the cloud from opposite sides.
Cloud-security concerns of the provider
- Isolate different clients in the service platform: enforcement and verification
- Protect the infrastructure: trusted computing base (TCB); integrity of hypervisors, kernels, and applications; strong enforcement with trusted hardware
- Prevent insider attacks: operators have reduced privileges
Multi-tenancy in cloud computing
Stack: Client Application / Middleware/JVM / VM/Partition/OS / Instance/Hypervisor / Hardware
- Software-aaS: one application instance per client, using the same DB engine (GMail, Dropbox, Facebook...)
- Platform-aaS: one DB engine or OS process per client on the same OS kernel (shared webhosting, Salesforce...)
- Infrastructure-aaS: a dedicated OS instance per client, on the same machine instance (Rackspace, Amazon EC2...)
- Servers-aaS: dedicated CPU and hypervisor per client, on the same shared hardware (IBM SoftLayer, Internap...)
Cloud-security concerns of clients
- Prevention of abuse by the provider: restriction of administrative privileges; physical location and legal aspects ("jurisdiction attacks")
- Loss of control and audit mechanisms: direct physical access, log files
- Confidentiality of data? Client "encrypts" all data and computations in the cloud
- Integrity of data? Cloud proves the correctness of responses
- Who manages the keys, and how? Cryptography is a powerful technology but merely shifts power to those who control the keys
- How to destroy data in the cloud? Control information proliferation
Computing on encrypted data
- How can one manipulate encrypted data? How can a computer run an encrypted program without knowledge of what it does?
- Celebrated research topic in cryptography, formulated in 1978
- Millionaires' problem (Yao 1986): secure two-party computation, garbled circuits; quite practical today for limited functions
- Fully homomorphic encryption: breakthrough result (Gentry 2009), but very far from practical
Figure: a client with secret data x sends E(x) to a server holding a secret program P and a secret input y; the server computes on the ciphertext and returns E(P(x, y)).
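Neither garbled circuits nor fully homomorphic encryption fits on a slide, but the core idea of computing on encrypted data can be shown with a much simpler, insecure toy: textbook RSA is multiplicatively homomorphic, so a server can multiply two values it only sees in encrypted form. The tiny primes below are for illustration only.

```python
# Toy demo (NOT secure, NOT one of the schemes on this slide):
# textbook RSA satisfies E(x) * E(y) mod n = E(x * y), the simplest
# instance of computing on encrypted data.
n = 61 * 53                            # RSA modulus from toy primes
e = 17                                 # public exponent
d = pow(e, -1, (61 - 1) * (53 - 1))    # private exponent (Python 3.8+)

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

cx, cy = encrypt(6), encrypt(7)
c_prod = (cx * cy) % n                 # server multiplies ciphertexts only
assert decrypt(c_prod) == 6 * 7        # the server never saw 6 or 7
```

Real protocols (garbled circuits, FHE) support richer functions than a single multiplication, at correspondingly higher cost.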
Three projects addressing cloud security at IBM Research - Zurich
Key management in the cloud
Key management - a solved problem?
Windows Azure storage service disruption (Feb. 2013):
- An expired SSL certificate caused a global outage of the Azure cloud-storage service
- Created a cascading series of failures in Azure, eventually bringing down Xbox Live and other services
- Repaired after about 12 hours
Key management today
- Proprietary solutions: every system requires its own format
- Often an afterthought to a secure system
- Life-cycle management operations are cumbersome
- Yet a cryptographic solution is only as secure as its key manager
Key management with secure hardware
Figure: smartcards, nethsm (Thales), IBM 4765, Infineon TPM.
Towards standardized key management
Figure: enterprise cryptographic environments (portals, production databases, collaboration and file servers, content-management systems, LAN/VPN/WAN, disk arrays, backup systems and replicas, CRM, backup disk, e-commerce applications, enterprise applications, business analytics, staging, dev/test, obfuscation, email backup, tape) connected to enterprise key management via the Key Management Interoperability Protocol.
Key management as a service
- Key management becomes a service: centralized control, lifecycle management, automated deployment, policy driven
- Focus on data-storage keys: tape, disks, filesystems, cloud storage
- OASIS Key Management Interoperability Protocol (KMIP): vendor-neutral format for accessing a key server in the enterprise; KMIP 1.0 (2010)
- IBM TKLM v2.0 (2011) / IBM Security Key Lifecycle Manager (SKLM): contributions from IBM Research - Zurich [BCH+10]
OASIS Key Management Interoperability Protocol (KMIP)
- OASIS... XML? No! A client-server protocol
- Defines objects with attributes, plus operations
  - Objects: symmetric keys, public/private keys, certificates, threshold key-shares...
  - Attributes: identifiers, type, length, lifecycle state, lifecycle dates, links to other objects...
  - Operations: create, register, attribute handling...
- Supported by multiple products today, mostly specific to the storage-encryption market
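To make the object/attribute/operation model concrete, here is a hypothetical in-memory sketch of a KMIP-style key server. The class and method names are my own illustration; real KMIP defines a binary TTLV wire encoding and a far richer object model than shown here.

```python
# Hypothetical sketch of the KMIP concepts: managed objects carrying
# attributes, plus Create (server generates key) and Register (client
# supplies key) operations. Illustrative only, not the real protocol.
import os, uuid
from dataclasses import dataclass, field

@dataclass
class ManagedObject:
    uid: str
    obj_type: str                  # e.g. "SymmetricKey"
    material: bytes
    attributes: dict = field(default_factory=dict)

class KeyServer:
    def __init__(self):
        self.store = {}

    def create(self, algorithm, length_bits):
        """Create operation: the server generates the key material."""
        uid = str(uuid.uuid4())
        self.store[uid] = ManagedObject(
            uid, "SymmetricKey", os.urandom(length_bits // 8),
            {"Cryptographic Algorithm": algorithm,
             "Cryptographic Length": length_bits,
             "State": "Pre-Active"})
        return uid

    def register(self, material, attributes):
        """Register operation: the client supplies existing key material."""
        uid = str(uuid.uuid4())
        self.store[uid] = ManagedObject(uid, "SymmetricKey",
                                        material, dict(attributes))
        return uid

    def get_attributes(self, uid):
        return self.store[uid].attributes

server = KeyServer()
uid = server.create("AES", 256)
print(server.get_attributes(uid)["State"])   # Pre-Active
```

The lifecycle state attribute ("Pre-Active" above) is what the lifecycle-management operations of the previous slide act upon.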
Key management as a cloud service
- Secure cloud computing requires key material in the cloud
- Key managers will become cloud services (keys-as-a-service)
- Standardization of protocols: OASIS KMIP, PKCS #11
- Control access to keys: policy- and role-based
Stateless cryptographic hardware-security modules
- IBM Enterprise PKCS #11 introduces virtualized cloud key managers [VDO14]
- A hardware-security module (HSM) performs cryptographic operations in a trusted execution environment
- Keys stored in an HSM are physically bound to the hardware; difficult to integrate with a cloud platform
- Virtualization layer for HSMs: controlled by a master key in multiple worker HSMs
- Stateless hardware tokens: scalable throughput for bulk cryptographic operations and key management
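A toy sketch of the stateless-token idea: the worker holds only a master key, while the individual keys live outside as wrapped blobs that are unwrapped on demand, so the token keeps no per-key state. The hash-based wrapping below is my own stand-in, not the key-wrap algorithm of [VDO14] or a real HSM.

```python
# Toy key wrapping (NOT a real key-wrap algorithm): the simulated HSM
# holds only MASTER_KEY; wrapped key blobs are stored by the untrusted
# platform and unwrapped inside the "HSM" on demand.
import hashlib, hmac, os

MASTER_KEY = os.urandom(32)   # never leaves the (simulated) HSM

def _pad(nonce, length):
    # hash-derived one-time pad; works for keys up to 32 bytes
    return hashlib.sha256(MASTER_KEY + nonce).digest()[:length]

def wrap(key):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(key, _pad(nonce, len(key))))
    tag = hmac.new(MASTER_KEY, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag   # opaque blob, safe to store anywhere

def unwrap(blob):
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expect = hmac.new(MASTER_KEY, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("corrupted key blob")
    return bytes(a ^ b for a, b in zip(ct, _pad(nonce, len(ct))))

k = os.urandom(32)
assert unwrap(wrap(k)) == k   # round trip through the stateless token
```

Because every worker HSM sharing the master key can unwrap any blob, throughput scales with the number of workers, which is the point of the virtualization layer above.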
Integrity and consistency of remote data
Cloud storage - data integrity?
- The kernel.org Linux repository was compromised in Aug. 2011
- Linux kernel sources were exposed, but they are public open source anyway
- Thanks to cryptographic integrity protection in the revision-control system (git), modifications to the kernel code could be detected
- Who determines the "true" kernel sources? What if a cloud service is subverted or client data are modified?
System model
- Server S: normally correct; sometimes faulty (untrusted, potentially malicious... Byzantine)
- Clients A, B, C... (Alice, Bob, Charlie): correct, but may crash; invoke operations on the server; disconnected; small trusted memory
- Asynchronous system; no client-to-client communication
Operations should be atomic or "linearizable"
Figure: a correct history across clients A, B, C with operations write(1,x), write(1,u), read(1)->u, write(2,w), read(1)->u, read(2)->w, ordered consistently for all clients.
Server violates integrity with a replay attack
Figure: a faulty server replays stale state: after write(1,x), write(1,u), write(2,v), write(1,t), one client's read(1) still returns the old value x, while another client's read(1) returns u and read(2) returns w.
Fork-linearizability as a solution
- The server may replay old state and present different views to clients: it forks their views of history, which the clients cannot detect without communicating
- Run a protocol to impose fork-linearizability, which ensures that if the server forks the views of two clients once, then their views are forked ever after: either they never again see each other's updates, or the violation is exposed
- Maintains causality for all operations
- Every consistency or integrity violation results in a fork; this is the best achievable guarantee for storage on an untrusted server
- Forks can be exposed via a cheap external channel with low security: synchronized clocks, periodic operations/gossip
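Why a single out-of-band comparison exposes a fork can be sketched with hash chains. This is my own minimal illustration, not the SUNDR or FAUST protocol: each client hashes the history of operations the server showed it, so forked views can never be reconciled.

```python
# Minimal illustration: each client keeps a hash chain over the history
# the server presented. Matching chain heads => same history; once the
# server forks the views, the heads differ forever.
import hashlib

def extend(head, operation):
    return hashlib.sha256(head + operation.encode()).digest()

# Correct server: both clients see the same operations -> heads match.
a = b = b"genesis"
for op in ["write(1,x)", "write(1,u)"]:
    a, b = extend(a, op), extend(b, op)
assert a == b

# Forking server: it hides an update from client B (replay attack).
a2 = extend(a, "write(2,w)")   # A's view continues with write(2,w)
b2 = extend(b, "write(2,v)")   # B is shown a diverging history
assert a2 != b2                # one gossip message exposes the fork
```

Real protocols additionally sign the chain heads and embed them in every request, so the server itself cannot forge agreement; the gossip channel only needs to carry a short digest.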
Fork-linearizability graphically
Figure: after the server forks the history of operations (w(1,x), w(1,u), w(1,t), w(2,v), w(2,w), r(1)->x, r(1)->u, r(2)->w), the views of A, B, and C each remain individually consistent but diverge permanently.
Fork-linearizable services for cloud integrity verification
Goal: if the server is correct, then clients see a linearizable service; in any case (even when the server is corrupted and violates its specification), the clients observe fork-linearizability. This makes it easy to detect consistency violations.
Storage systems:
- SUNDR [MS02, LKMS04]: secure untrusted data repository
- CSVN [CG09]: integrity-protecting Subversion revision-control system
- FAUST [CKS11]: fail-aware untrusted storage; never blocks, uses sporadic client-to-client messages
- Venus [SCCKMS10]: integrity-protecting cloud object storage
- Depot [MSLCADW11]: cloud storage with minimal trust
Generic collaboration services:
- Blind Stone Tablet [WSS09]: runs a relational database
- SPORC [FZFF10]: group collaboration using untrusted cloud resources; presents an editor for shared documents
- Services with commuting operations [CO13]: uses authenticated data types for complex operations
Policy-based secure deletion
Data needs to be erased
- Destroying data can be as critical as retaining it - it all depends...
- Deletion is in the interest of clients and/or providers
- Required by law: European Data Protection Directive, UK Data Protection Act, US Fair Debt Collection Practices Act
Data can no longer be erased
- Modern storage systems cannot erase data
- Common storage systems merely remove directory pointers and mark space as free; the data remains accessible through a lower-level API
- Storage interfaces have no operation for "really erase"
- Virtualized storage systems make deletion impossible: many layers of abstraction (software-defined storage (SDS), cloud storage); every storage layer repackages and caches data, which leaves traces
Approaches to securely delete data
- Magnetic media must be overwritten many times
- Solid-state storage requires low-level access to the controller, but no suitable interfaces are exposed
- Encryption as a solution [BL96, TLLP10]: encrypt the data and keep the key(s) in controlled, erasable memory; destroying the key(s) makes the data inaccessible
- This work extends the encryption-based approach with retention policies
- Caveat: advances in cryptanalysis
System model
- Secure deletion layer, implemented through encryption, between the user and the storage
- Small, controlled erasable memory M: stores the key(s)
- Large, permanent memory: cannot be erased; contains the protected data D and auxiliary state S
- Deletion operation: reads/writes/erases keys in M, writes to S, never touches the bulk data D
Secure deletion schemes with encryption
Per-item keys [P07, GKLL09, RCB12]:
- A separate key (k1, k2, ..., k9) for every protected item (f1, f2, ..., f9)
- To delete an item, destroy its key
- Huge master key, difficult to manage; deletion cost is constant
Single key:
- One key k encrypts multiple protected items (f1, ..., f9)
- Secure deletion of one item = rekey operation: choose a fresh key, re-encrypt the surviving items with the new key, destroy the old key
- Small master key; deletion cost is linear
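The single-key rekey operation can be sketched in a few lines. The XOR-with-hash "cipher" below is a toy stand-in for a real cipher, used only to show the control flow: delete one item, re-encrypt the survivors, destroy the old key.

```python
# Toy single-key scheme: all items under one key, so deleting one item
# re-encrypts every survivor under a fresh key (linear deletion cost).
# enc/dec use an XOR pad derived by hashing (illustration only).
import hashlib, os

def enc(key, item_id, data):
    pad = hashlib.sha256(key + item_id.encode()).digest()[:len(data)]
    return bytes(a ^ b for a, b in zip(data, pad))

dec = enc  # XOR cipher: decryption is the same operation

key = os.urandom(32)   # small master key, lives in erasable memory M
store = {f"f{i}": enc(key, f"f{i}", f"item{i}".encode()) for i in range(1, 5)}

def secure_delete(item_id):
    """Rekey: re-encrypt all surviving items, then destroy the old key."""
    global key, store
    new_key = os.urandom(32)
    store = {i: enc(new_key, i, dec(key, i, c))
             for i, c in store.items() if i != item_id}
    key = new_key   # old key erased -> deleted item is unrecoverable

secure_delete("f2")
assert "f2" not in store and dec(key, "f1", store["f1"]) == b"item1"
```

The per-item-key scheme is the opposite trade-off: deletion touches only one key, but the erasable memory M must hold a key per item.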
Secure deletion schemes with encryption (cont.)
Tree of keys [DFIJ99]:
- For every tree node, a super-key encrypts the sub-keys
- Items (f1, ..., f9) protected by keys at the leaves
- Deleting one item = rekey along the path from the root to the deleted item
- Small master key; deletion cost is logarithmic
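The logarithmic cost is easy to see in code. In this sketch the key-wrapping is simulated (each node simply records its children), since only the shape of the rekey operation matters here, not the cipher.

```python
# Sketch of the key-tree scheme: items at the leaves, each inner node's
# key protects its children's keys. Deleting one item rekeys only the
# root-to-leaf path, so the cost is logarithmic in the number of items.
import os

class Node:
    def __init__(self, children=None):
        self.key = os.urandom(16)
        self.children = children or []   # empty list -> leaf (one item)

def build(depth):
    """Full binary tree with 2**depth leaves."""
    if depth == 0:
        return Node()
    return Node([build(depth - 1), build(depth - 1)])

def delete_leftmost(node, rekeyed):
    """Erase the leftmost leaf; rekey each inner node on its path."""
    if not node.children:          # target leaf: its key is simply erased
        return True
    node.key = os.urandom(16)      # fresh key for this inner node
    rekeyed.append(node)
    if delete_leftmost(node.children[0], rekeyed):
        node.children.pop(0)       # drop the emptied subtree
    return not node.children

root = build(3)                    # 8 leaves, 7 inner nodes
path = []
delete_leftmost(root, path)
print(len(path))                   # 3 = log2(8) inner nodes rekeyed
```

Compare with the previous slide: per-item keys give constant-cost deletion but a huge master key; a single key gives a tiny master key but linear cost; the tree sits in between on both axes.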
Flexible deletion policies modeled by a graph
- The scheme supports arbitrary policies modeled as a monotone circuit (AND, OR, and threshold gates)
- The master key contains one key per attribute
- Deletion operations are fast: simply erase the keys of the deleted attributes; this may trigger a rekey of recursively protected keys
- Implementation in the secret-key setting
- Modular specification through composition; provably secure constructions (in a cryptographic model)
- Generalizes all existing schemes for cryptographic secure deletion
Policy graph for secure deletion
- Attributes at the input nodes (Alice, Bob, Project_X, Exp_2014, Exp_2015); initially, all are viewed as FALSE
- Protection classes p1, p2, p3, ... take their value according to a Boolean expression over the attributes (OR/AND gates)
- A deletion operation specifies attribute(s), for example:
  - Delete(Exp_2014): p2, p5 securely erased
  - Delete(Alice): p2, p3 securely erased
  - Delete(Bob): no effect
  - Delete(Project_X): p4, p5 securely erased
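The effect of attribute deletion can be simulated by evaluating the monotone circuit. The exact gate wiring is not fully recoverable from the slide, so the expressions below are my reconstruction, chosen to reproduce the four deletion examples listed above (Exp_2015's wiring is left out for the same reason); keys are modeled simply by attribute presence.

```python
# Sketch: a protection class is recoverable iff its monotone Boolean
# expression over the attributes is still TRUE. Erasing an attribute's
# key flips it to FALSE, destroying every class that depended on it.
# Gate wiring is a reconstruction consistent with the slide's examples.
attrs = {"Alice", "Bob", "Project_X", "Exp_2014", "Exp_2015"}  # key exists

policy = {
    "p1": lambda a: "Alice" in a or "Bob" in a,                  # OR gate
    "p2": lambda a: "Alice" in a and "Exp_2014" in a,            # AND gate
    "p3": lambda a: "Alice" in a,
    "p4": lambda a: "Project_X" in a,
    "p5": lambda a: "Project_X" in a and "Exp_2014" in a,        # AND gate
}

def recoverable(a):
    return {p for p, expr in policy.items() if expr(a)}

def delete(attribute):
    attrs.discard(attribute)   # securely erase this attribute's key

delete("Exp_2014")             # slide example: p2 and p5 are destroyed
print(sorted(recoverable(attrs)))
```

Note that Delete(Bob) really has no effect here: p1 is an OR gate, so it stays recoverable through Alice's key, matching the slide.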
Prototype implementation
- Encrypting virtual file system in Linux (FUSE)
- System policy in a global configuration file; per-file policy and metadata in extended attributes
- Initialization: delfs --secure_dir=/secure /raw_dir /delfs_dir
- Delete files according to attributes: delfsctl delete /delfs_dir attribute...
- Periodic cleanup of unused raw storage: delfsctl cleanup /delfs_dir
Secure deletion summary
- The encryption-based approach is suitable for any storage system: networked storage, cloud storage
- Secure deletion layer: similar to compression/encryption/deduplication... layers
- Current work on extending it to cloud storage systems
Conclusion
- Cloud computing is here to stay: commodity web services take over from customized solutions
- Physical infrastructure becomes virtual: software-defined environments (SDx)
- Security remains a hot topic for cloud computing
- Cryptography remains the key technology to realize security in the cloud; it addresses multiple security needs: security for the provider and security for clients
Questions?
Christian Cachin: www.zurich.ibm.com/~cca/
Security research: www.zurich.ibm.com/csc/security/
IBM Research - Zurich: www.zurich.ibm.com
Literature (Key management)
[BCH+10] M. Björkqvist, C. Cachin, R. Haas, X.-Y. Hu, A. Kurmus, R. Pawlitzek, and M. Vukolic, "Design and implementation of a key-lifecycle management system," in Proc. Financial Cryptography, 2010.
[VDO14] T. Visegrady, S. Dragone, and M. Osborne, "Stateless cryptography for virtual environments," IBM J. Res. & Dev., 2014.
Literature (Integrity and consistency)
[CO13] C. Cachin and O. Ohrimenko, "On verifying the consistency of remote untrusted services," Research Report RZ 3841, IBM Research, 2013.
[C11] C. Cachin, "Integrity and consistency for untrusted services," in Proc. Current Trends in Theory and Practice of Computer Science (SOFSEM 2011) (I. Cerna et al., eds.), LNCS 6543, 2011.
[CG09] C. Cachin and M. Geisler, "Integrity protection for revision control," in Proc. ACNS, LNCS 5536, 2009.
[CKS11] C. Cachin, I. Keidar, and A. Shraer, "Fail-aware untrusted storage," SIAM Journal on Computing, vol. 40, Apr. 2011.
[CSS07] C. Cachin, A. Shelat, and A. Shraer, "Efficient fork-linearizable access to untrusted shared memory," in Proc. PODC, 2007.
[SCCKMS10] A. Shraer, C. Cachin, A. Cidon, I. Keidar, Y. Michalevsky, and D. Shaket, "Venus: Verification for untrusted cloud storage," in Proc. ACM Workshop on Cloud Computing Security (CCSW 2010), 2010.
Literature (Integrity and consistency, cont.)
[FZFF10] A. Feldman, P. Zeller, M. Freedman, and E. Felten, "SPORC: Group collaboration using untrusted cloud resources," in Proc. OSDI, 2010.
[LKMS04] J. Li, M. Krohn, D. Mazieres, and D. Shasha, "Secure untrusted data repository (SUNDR)," in Proc. OSDI, 2004.
[MS02] D. Mazieres and D. Shasha, "Building secure file systems out of Byzantine storage," in Proc. PODC, 2002.
[MSLCADW11] P. Mahajan et al., "Depot: Cloud storage with minimal trust," ACM TOCS, 2011.
[WSS09] P. Williams, R. Sion, and D. Shasha, "The blind stone tablet: Outsourcing durability to untrusted parties," in Proc. Network and Distributed System Security Symposium (NDSS), 2009.
Literature (Secure deletion)
[CHHS13] C. Cachin, K. Haralambiev, H.-C. Hsiao, and A. Sorniotti, "Policy-based secure deletion," in Proc. ACM Conference on Computer and Communications Security (CCS 2013), 2013.
[BL96] D. Boneh and R. Lipton, "A revocable backup system," in Proc. 6th USENIX Security Symposium, 1996.
[DFIJ99] G. Di Crescenzo, N. Ferguson, R. Impagliazzo, and M. Jakobsson, "How to forget a secret," in Proc. 16th Symposium on Theoretical Aspects of Computer Science (STACS), LNCS 1563, 1999.
[GKLL09] R. Geambasu, T. Kohno, A. Levy, and H. Levy, "Vanish: Increasing data privacy with self-destructing data," in Proc. 18th USENIX Security Symposium, 2009.
[P07] R. Perlman, "File system design with assured delete," in Proc. Network and Distributed Systems Security Symposium (NDSS), 2007.
[RCB12] J. Reardon, S. Capkun, and D. Basin, "Data node encrypted file system: Efficient secure deletion for flash memory," in Proc. 21st USENIX Security Symposium, 2012.
[TLLP10] Y. Tang, P. Lee, J. Lui, and R. Perlman, "FADE: Secure overlay cloud storage with file assured deletion," in Proc. Securecomm, 2010.