A Multi-Cloud based Approach to Enhance Data Security and Availability in Cloud Storage




A Multi-Cloud based Approach to Enhance Data Security and Availability in Cloud Storage

Siva Rama Krishna T. a,*, Dr. A. S. N. Chakravarthy a, Naveen Kumar G. b

a Department of Computer Science and Engineering, University College of Engineering Vizianagaram - JNTUK, Vizianagaram, India
b Department of Information Technology, University College of Engineering Vizianagaram - JNTUK, Vizianagaram, India

Abstract

Cloud computing is growing at a rapid pace because of its promise of low investment and easy access to data and services. However, the security of sensitive data remains an open challenge in untrusted cloud environments. Because of the single-point-of-failure model, single clouds carry the risk of service unavailability, and the possibility of insider attacks in single clouds also worries cloud investors. As a remedy, a movement towards multi-cloud environments is emerging. This paper surveys recent research on cloud security and highlights the higher security and data availability that multi-clouds offer. It proposes a new way of handling cloud storage that is fairly simple to manage and robust against most of the data attacks occurring in present cloud storage scenarios, and it explains how the existing infrastructure can be used far more effectively than it is now. The aim is to relieve end users of concerns about both the safety of the data stored in the cloud and the availability of its services.

Keywords: cloud computing; cloud storage; multi-cloud; service availability; DepSky architecture; data security; malicious insider; Byzantine fault.

1. Introduction

Addressing privacy and security issues must be a high priority in cloud environments. Single-cloud environments are becoming less popular due to their inefficiency in dealing with service availability issues and malicious insiders.
In recent years there has been a move towards multi-clouds, also called inter-clouds or cloud-of-clouds [1]. This paper focuses on the data security and data availability aspects of single-cloud storage. Because data and information are shared with a third party, cloud computing users want to avoid untrustworthy cloud providers; protecting private and important information, such as credit card details or a patient's medical records, from attackers or malicious insiders is of critical importance. In addition, the potential for migration from a single-cloud to a multi-cloud environment is examined, and research on security issues in single and multi-clouds is surveyed.

The main objectives of this paper are:
- Providing better accessibility to the data stored in the cloud.
- Enforcing effective security on sensitive data in the cloud.
- Increasing the reliability of the storage services provided.
- Protecting data against insider hacking.

The remainder of the paper is organized as follows. Section 2 summarizes the benefits of using cloud services. Section 3 describes the existing cloud storage scenario and its challenges and risks. Section 4 analyzes the migration to multi-clouds and gives a brief overview of the DepSky architecture. Section 5 presents similar research on multi-cloud storage models. Section 6 presents the proposed cloud storage model in detail.

* Corresponding author. E-mail: t_srkrishna@yahoo.com

Section 7 presents the novelty and benefits of migrating to the proposed model. Section 8 analyzes the proposed model with respect to the attacks and faults it can withstand. Section 9 concludes the paper with a note on future work.

2. Benefits of Cloud Services

Some key benefits of using cloud services are:
- Agility improves with users' ability to re-provision technological infrastructure resources.
- Cost is claimed to be reduced; in a public cloud delivery model, capital expenditure is converted to operational expenditure.
- Virtualization allows servers and storage devices to be shared and utilization to be increased; applications can easily be migrated from one physical server to another.
- Reliability improves if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
- Scalability and elasticity come via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time, without users having to engineer for peak loads.
- Maintenance of cloud applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.
- On-demand self-service allows users to obtain, configure and deploy cloud services themselves.

3. Drawbacks of Existing Cloud Models

The top three concerns in the current cloud market are security, service availability and performance [2]. This paper focuses mainly on the first two.

3.1. Security Risks

Although cloud service providers can offer benefits to users, security risks play a major role in the cloud computing environment. According to a recent IDC survey, security is the top cloud computing challenge for 87% of IT executives [2]. Protecting private and important information such as credit card details or patients' medical records from attackers or malicious insiders is of critical importance.
Resources in the cloud are accessed through the Internet; consequently, even if the cloud provider secures its infrastructure, data is still transmitted to users over networks that may be insecure. As a result, Internet security problems affect the cloud, with greater risks due to the valuable resources stored within it. Data intrusion into the cloud through the Internet by hackers and cybercriminals also needs to be addressed. The two security factors that particularly affect single clouds are data integrity and data intrusion.

3.1.1. Data Integrity

One of the most important issues related to cloud security risks is data integrity. Data stored in the cloud may be damaged during transfer operations to or from the cloud storage provider [3] [4]. An example of breached data occurred in Google Docs in 2009, which led the Electronic Privacy Information Center to ask the Federal Trade Commission to open an investigation into Google's cloud computing services [5]. Another example of a risk to data integrity occurred in Amazon S3, where users suffered from data corruption [6]. It is argued that data corruption is difficult to address when multiple clients use cloud storage or when one user synchronizes multiple devices [7]. One proposed solution is to use a Byzantine fault-tolerant replication protocol within the cloud.

3.1.2. Data Intrusion

Another security risk that may occur with a cloud provider such as Amazon S3 is a hacked password, i.e. data intrusion. If someone gains access to an Amazon account password, they will be able to access all of the account's instances and resources. The stolen password thus allows the hacker to erase all the information inside any virtual machine instance of the stolen account, modify it, or even disable its services.
Furthermore, the user's email address (the Amazon user name) may itself be hacked; since Amazon allows a lost password to be reset by email, the hacker may then log in to the account with the newly reset password.

3.2. Service Availability

Another major concern with cloud services is service availability. Amazon mentions in its licensing agreement [8] that the service might be unavailable from time to time, and that if any Amazon web service fails, Amazon bears no liability for the failure. Companies seeking to protect services from such failures need measures such as backups or the use of multiple providers. Both Google Mail and Hotmail experienced service downtime recently [9] [10]. If a delay affects a user's payments for cloud storage, the user may lose access to their data. In another incident, the cloud storage provider LinkUp (MediaMax) lost 45% of its stored client data due to a system administrator error [11].
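The availability argument for spreading data across providers can be made concrete with a back-of-the-envelope calculation (ours, not taken from the cited surveys): if each provider is up independently with probability p, data mirrored on k providers is unreachable only when all k are down simultaneously, i.e. with probability (1 - p)^k.

```python
def replicated_availability(p: float, k: int) -> float:
    """Availability of data mirrored on k providers, each independently
    available with probability p: 1 minus the chance all k are down."""
    return 1.0 - (1.0 - p) ** k

# A single provider at 99% availability (~3.65 days of downtime per year)
# versus the same data mirrored on two independent providers.
single = replicated_availability(0.99, 1)    # 0.99
mirrored = replicated_availability(0.99, 2)  # 0.9999
```

The independence assumption is the crucial one; correlated outages (shared networks, shared software stacks) reduce the benefit, which is why the paper later insists on autonomous providers.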

3.3. Limitations of the Existing System

Briefly, the limitations of the traditional cloud storage service are:
- Sensitive data is prone to insider hacking.
- The security protocols implemented are outdated.
- Data service availability is moderate.
- Service failure rates are high.
- Byzantine fault rates are high.
- Data grabbing is a highly possible threat.

4. Migrating to Multi-Clouds

This section elaborates on the migration of cloud computing from single to multi-clouds to ensure the security of the user's data.

4.1. Introduction to Multi-Clouds

The term multi-clouds is similar to the terms inter-clouds or cloud-of-clouds [1] introduced by Vukolic. These terms suggest that cloud computing should not end with a single cloud. Recent research has focused on the multi-cloud environment, which controls several clouds and avoids dependency on any one individual cloud [12]. Research has identified two layers in the multi-cloud environment: the bottom layer is the inner-cloud, while the second layer is the inter-cloud. It is in the inter-cloud that Byzantine fault tolerance finds its place.

4.2. Byzantine Protocols

In cloud computing, faults in software or hardware are known as Byzantine faults; the term usually relates to inappropriate behaviour and intrusion tolerance, and also covers arbitrary and crash faults. Much research has been dedicated to Byzantine fault tolerance (BFT) since its first introduction, but despite this attention BFT still suffers from limitations to practical adoption and remains peripheral in distributed systems [13]. As mentioned earlier, BFT protocols are not suitable for single clouds. Vukolic argues that one limitation of BFT for the inner-cloud [3] [4] is that BFT, like all fault-tolerant protocols, requires a high level of failure independence. If a Byzantine failure occurs at a particular node in the cloud, it is reasonable to use a different operating system, a different implementation, and different hardware to ensure that the failure does not spread to other nodes in the same cloud. In addition, if an attack happens to a particular cloud, the attacker may be able to hijack that inner-cloud infrastructure.

4.3. DepSky System: A Multi-Clouds Model

The DepSky system [14] addresses the availability and confidentiality of data in its storage system by using multiple cloud providers, combining Byzantine quorum system protocols, cryptographic secret sharing and erasure codes.

4.3.1. DepSky Architecture

The DepSky architecture (Fig. 1) [14] consists of four clouds, each with its own particular interface. The DepSky algorithm exists on the clients' machines as a software library used to communicate with each cloud. These four clouds are storage clouds, so no code is executed on them; the DepSky library permits read and write operations with the storage clouds.

Fig. 1. The DepSky Model of Cloud Storage

4.4. Analysis of Multi-Cloud Storage

Moving from single clouds or inner-clouds to multi-clouds is reasonable and important for many reasons. According to research [3], services of single clouds are still subject to outages. In addition, a survey [15] showed that over 80% of IT executives fear security threats and loss of control of data and systems. We assume that the main purpose of moving to inter-clouds is to improve on what single clouds offer by distributing reliability, trust, and security among multiple cloud providers. In addition, multi-clouds can use reliable distributed storage built on a subset of BFT techniques.

5. Related Work

There has been little viable research on using multiple clouds to store data, but we observe the following. In the model of G. Rakesh Reddy et al. [16], two entities, i.e. storage locations, were used to store data.
The user is given the choice between these two storage locations based on the budget he or she can afford. This model, however, does not successfully secure the user's content, since plain data is present on the server and is prone to insider hacking, one of the major threats faced by the storage models currently implemented.

In the other model, by Alysson Bessani et al. [14], the DepSky architecture was implemented. The functionality of the system is divided into three layers: the conceptual data unit, the generic data unit and the implementation data unit. The stored data can be of arbitrary size and exist in variable versions on any given data storage unit. This model does not withstand insider hacking, data theft or man-in-the-middle attacks. In addition, the model's data entities have specific functionalities, which eliminates the chance of using multiple clouds to distribute load across the servers in the architecture. Moreover, allowing variable-sized files and versions could lead to ambiguity.

6. Proposed Cloud Storage Model

With roots in the DepSky architecture, a cloud storage model is proposed below, promising more secure and more available service than the existing one. The approach is broadly divided into two phases: an upload phase and a download phase. The algorithms of both phases execute on the client machine, and four autonomous traditional cloud storage servers are required to make them work. These servers may be in different geographical locations, and may even belong to different cloud service providers.

6.1. Upload Phase

The data to be uploaded is subjected to the following procedures, in this order (as shown in Fig. 2): compression, encryption, splitting, replication.

The data file is first compressed, which gives two advantages: it reduces the amount of content to be uploaded, and it provides an overall envelope for the data being transferred. In the proposed system, zip compression deflates the data to be uploaded by at least 2%. Because this paper uses multiple clouds instead of a single cloud, more network resources are consumed than in the existing system; compression reduces this extra utilization and the stress on network bandwidth to some extent. Compression also provides the first layer of security by enveloping the content: in later stages, when the file is split, all the split parts must be rejoined in the same order, and in the absence of the correct order the decompression throws a checksum error that immediately stops the process, thereby protecting the user data.

The compressed file is then encrypted with the Blowfish algorithm, to ensure the security of the data being uploaded. This gives the second layer of security and ensures that only the authorized user can read the respective file. Blowfish is chosen for encryption because it is highly secure, has a high data manipulation rate, and no effective cryptanalysis of it has been found to date.

Fig. 2. Process Flow in the Upload Phase of the Proposed Model

The encrypted file is then split into four chunks, the chunks are replicated, and finally these chunks are stored at random across four autonomous cloud servers, with each server holding two different chunks of a file. The advantages of this process are: since the data is available on multiple cloud servers, the failure of one or two servers does not affect data availability, and even in the case of data loss the replicated file chunks act as a backup. Since only the chunks are replicated, and only once, just 100% excess storage is required, whereas traditional systems replicate the entire file three times, requiring 300% excess storage. The procedure also provides the third layer of security by ensuring that the entire file content is never present at a single location.
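The four-step upload pipeline and its inverse can be sketched as follows. This is our minimal illustration, not the authors' implementation: it uses zlib for the compression envelope, and a SHA-256-based XOR keystream as a deliberately simple stand-in for the Blowfish cipher the paper specifies (so the sketch stays library-free; it is not a production cipher).

```python
import hashlib
import zlib


def _keystream(key: bytes, n: int) -> bytes:
    # Stand-in stream cipher; the paper uses Blowfish, which we do not
    # assume a particular crypto library for here. NOT for production use.
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])


def _xor(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))


def upload(data: bytes, key: bytes, n_chunks: int = 4) -> list[bytes]:
    compressed = zlib.compress(data)        # layer 1: envelope + less bandwidth
    ciphertext = _xor(compressed, key)      # layer 2: confidentiality
    size = -(-len(ciphertext) // n_chunks)  # ceiling division
    # layer 3: split so no single server ever holds the whole file
    return [ciphertext[i * size:(i + 1) * size] for i in range(n_chunks)]


def download(chunks: list[bytes], key: bytes) -> bytes:
    # Inverse of upload: rejoin in order, decrypt, decompress. Joining the
    # chunks in the wrong order corrupts the zlib stream, so decompression
    # fails -- the checksum guard described above.
    return zlib.decompress(_xor(b"".join(chunks), key))
```

A round trip, `download(upload(data, key), key)`, returns the original bytes; feeding the chunks back in any other order makes `zlib.decompress` raise an error instead of yielding corrupted plaintext.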
The details of the chunks and their respective storage locations are recorded in a log for proper retrieval of the file. Since each chunk is replicated, every chunk of a file makes two entries in the log file.
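The chunk placement and the retrieval log can be illustrated as below. This is our sketch: the paper only requires that each server hold two different chunks, so the shift-by-one replica rule and the integer server ids are illustrative assumptions. The `fetch` helper also previews the download-phase failover, where the replica entry is used when the primary server is down.

```python
import random


def place_chunks(n_chunks: int = 4, n_servers: int = 4) -> dict[int, list[int]]:
    # Log: chunk id -> [primary server, replica server]. Placing the replica
    # one position along a shuffled server order guarantees that every server
    # stores two *different* chunks, so no server ever holds the whole file.
    order = list(range(n_servers))
    random.shuffle(order)  # random placement across the four clouds
    return {i: [order[i % n_servers], order[(i + 1) % n_servers]]
            for i in range(n_chunks)}


def fetch(log: dict[int, list[int]], chunk_id: int, up: set[int]) -> int:
    # Failover lookup: try the first log entry for the chunk, and fall back
    # to the replica entry if that server is unavailable.
    for server in log[chunk_id]:
        if server in up:
            return server
    raise RuntimeError("both replicas of chunk %d are unavailable" % chunk_id)
```

With this placement, any single server outage still leaves every chunk reachable via either its primary or its replica entry.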

6.2. Download Phase

This phase is a simple inversion of the upload phase. The locations of the file chunks are found by a simple search of the log file. Since each chunk has a replica, only one log entry is needed for download; if the first entry does not work (service down), the other entry is used. All four chunks of the file are downloaded, rejoined in the same order as the split, then decrypted with the same key supplied for encryption, and finally decompressed to obtain the original data file.

7. Novelty of the Proposed Model

The main objective of the model is to decentralize the data storage system currently implemented. The proposed model offers the following advantages.

7.1. Load

The load on the servers is reduced, since each user accesses a server to store only a fraction of the file instead of the entire file, thereby reducing the interaction time with any one of the four servers in the model.

7.2. Secure Content Communication

In this model, the data sent and received is split and encrypted, so the content is secure when it leaves or returns to the user's terminal. This advantage can be furthered by sending the data over a secure SSL channel, which prevents foul play such as data grabbing or phishing. This aspect is discussed further in Section 8.

7.3. Augmented Data Access

The data is split and stored on four different servers, and the system is designed so that if one server is down, the user can still access the file by communicating with only three of the four servers in the architecture. This matters when a server is down for maintenance, is under attack, or is under heavy stress due to its load.

7.4. Preservation of Data Integrity

Since every partition of an upload is replicated, whenever the integrity of the data is in question there is already a reliable reference copy that can be used to restore the user's data to a reliable state. This is beneficial when the data has been modified by an unauthorized party, and also when the user accesses the file through multiple devices such as tablets, PCs and laptops.

8. Security and Availability Audit of the Proposed Model

The major strength of this model lies in the three-level security proposed in Section 6. The proposed model can withstand the following attacks and faults.

8.1. Insider Hacking

Current models face the huge threat of insider hacking of sensitive data: if a hacker or malicious user were to bypass security and gain access to the storage server, data theft and data manipulation become possible, because the files are stored entirely at one location. The proposed model eliminates this by splitting the data files and distributing them across four different locations rather than one. Even when one location is breached, the breach is ineffective, because the content at any single location is encrypted, incomplete, and enveloped by the compression algorithm. A further advantage is that an attacked location functions as an early warning system that alerts the service provider's security enforcers.

8.2. Man-in-the-Middle Attack

Because the data is split and encrypted, the files being uploaded or downloaded can never be captured in their entirety. A man-in-the-middle attack therefore yields no fruitful result, as the content is encrypted and the complete file is never present at one instant in a single location.

8.3. Byzantine Faults and Manual Faults

Byzantine faults and manual faults may result in data loss and/or data inconsistency.
The proposed model deals with these faults effectively, as there are always two instances of the same split file at two different locations. This ensures that a file chunk is always accessible even when one of the four servers is under stress, unresponsive, or failing to process requests efficiently. File integrity is also maintained, as the backups provide a safety net that preserves a reliable instance of the file.

9. Conclusion and Future Work

This paper addresses two aspects important to the end user: making the user feel secure about the sensitive data being stored in the cloud, and guaranteeing that the data is always available. The data security protocols followed in this paper ensure an extremely secure environment in which users can place sensitive data. The proposed model is invulnerable to many of the attacks presently affecting cloud users, and it also ensures resilient cloud storage.

In the proposed model, four cloud servers at random locations are used. The data transfer rate degrades with the geographical distance to these servers, so when a server is selected randomly the data rate of server access suffers and is unpredictable. If, instead, efficient algorithms are used to identify optimal servers and to place the file partitions based on weights and the user's location, the speed of file access increases, raising overall efficiency. In future work, we want to add this feature to the proposed model.

References

[1] Mohammed A. AlZain, Eric Pardede, Ben Soh, James A. Thom. Cloud computing security: from single to multi-clouds. In: Proceedings of the 45th Hawaii International Conference on System Sciences; 2012 Jan 4-7; Grand Wailea, Maui, Hawaii. IEEE; 2012. p. 5490-5499.
[2] Frank Gens. New IDC IT cloud services survey: top benefits and challenges. IDC Enterprise Panel; 2009. Available from: http://blogs.idc.com/ie/?p=730.
[3] C. Cachin, I. Keidar, A. Shraer. Trusting the cloud. ACM SIGACT News 2009; 40: 81-86.
[4] C. Cachin, S. Tessaro. Optimal resilience for erasure-coded Byzantine distributed storage. In: Proceedings of the 19th International Conference on Dependable Systems and Networks; 2006 Jun 25-28; Philadelphia, USA. IEEE; 2006. p. 497-498.
[5] EPIC v. Google Inc., Complaint and request for injunction, request for investigation and for other relief in the matter of Google, Inc. and cloud computing services (Federal Trade Commission 2009).
[6] Amazon cloud service goes down and takes popular sites with it. The New York Times 2012 Oct 22; Sect. Technology.
[7] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, D. Song. Provable data possession at untrusted stores. In: Proceedings of the 14th ACM Conference on Computer and Communications Security; 2007 Oct 29 - Nov 02; Alexandria, USA. ACM; 2007. p. 598-609.
[8] Amazon Web Services customer agreement. Amazon Web Services Inc.; 2012.
[9] Gmail goes down in India. The Times of India 2013 Jun 12; Sect. TOI Tech.
[10] Arthur de Haan. Details of the Hotmail / Outlook.com outage. Outlook Blog; 2013. Available from: http://blogs.office.com/b/microsoft-outlook/archive/2013/03/13/details-of-the-hotmail-outlook-com-outage-on-march-12.aspx.
[11] Loss of customer data spurs closure of online storage service The Linkup. Network World News 2008 Aug 11.
[12] C. Cachin, R. Haas, M. Vukolic. Dependable storage in the intercloud. IBM Research Report RZ 3783. IBM Research; 2010.
[13] I. Abraham, G. Chockler, I. Keidar, D. Malkhi. Byzantine disk paxos: optimal resilience with Byzantine shared memory. Distributed Computing 2006; 18(5): 387-408.
[14] Alysson Bessani, Miguel Correia, Bruno Quaresma, Fernando André, Paulo Sousa. DepSky: dependable and secure storage in a cloud-of-clouds. In: Proceedings of EuroSys'11, the Sixth Conference on Computer Systems; 2011 Apr 10-13; Salzburg, Austria. ACM; 2011. p. 31-46.
[15] 2009 global survey of cloud computing. Avanade and Kelton Research; 2009.
[16] G. Rakesh Reddy, M.B. Raju, B. Ramana Naik. A novel approach for multi-cloud storage security in cloud computing. Int J Comp Sci and Inf Tech 2012; 3(5): 5043-504.