IJFEAT INTERNATIONAL JOURNAL FOR ENGINEERING APPLICATIONS AND TECHNOLOGY

TITLE: DATA PLACEMENT AND SECURITY MANAGEMENT IN CLOUD COMPUTING FOR AN EFFECTIVE AND FLEXIBLE DISTRIBUTED SYSTEM

1 Mr. Pritam S. Bhoyar, 2 Mr. Abhijeet A. Sawankar
1 Student, C.S.E., J.D.I.E.T., Yavatmal, Maharashtra, India, Pritambhoyar1@gmail.com
2 Student, ExTC, J.D.I.E.T., Yavatmal, Maharashtra, India, abhijeetsawankar@gmail.com

Abstract
This paper proposes an effective and flexible distributed scheme with explicit dynamic data support to ensure the correctness of users' data in the cloud. We rely on erasure-correcting code in the file distribution preparation to provide redundancy and to guarantee data dependability. This construction drastically reduces the communication and storage overhead compared to traditional replication-based file distribution techniques. By utilizing the homomorphic token with distributed verification of erasure-coded data, the scheme achieves both storage correctness insurance and data error localization: whenever data corruption is detected during the storage correctness verification, the scheme can almost guarantee the simultaneous localization of the errors, i.e., the identification of the misbehaving server(s). The scheme further supports secure and efficient dynamic operations on data, including delete, update and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attacks, and even server colluding attacks.

Index Terms: Byzantine failure, homomorphic token, distributed scheme, cloud.

1. INTRODUCTION
Several trends are opening up the era of Cloud Computing, an Internet-based development in the use of computer technology.
These trends include cheaper and more powerful processors which, combined with the software-as-a-service (SaaS) computing architecture, are transforming data centers into pools of computing services on a very large scale. Growing network bandwidth and reliable yet flexible network connections make it possible for users to subscribe to high-quality services from data and software that reside solely on remote data centers. Moving data into the cloud offers great convenience to users, since they no longer have to deal with the complexities of direct hardware management. Among the pioneering Cloud Computing vendors, Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) are both well-known examples. While these Internet-based online services do provide huge amounts of storage space and customizable computing resources, this computing platform shift also eliminates the responsibility of local machines for data maintenance. Users are thus at the mercy of their cloud service providers for the availability and integrity of their data. From the perspective of data security, which has always been an important aspect of quality of service, Cloud Computing inevitably faces new challenging security threats for a number of reasons. First, traditional cryptographic primitives for data security protection cannot be directly adopted, because users lose control of their data under Cloud Computing. Verification of correct data storage in the cloud must therefore be conducted without explicit knowledge of the whole data. Considering the various kinds of data each user stores in the cloud, and the demand for long-term continuous assurance of their data safety, the problem of verifying the correctness of data storage in the cloud becomes
even more challenging. One thing is clear: Cloud Computing is not just a third-party data warehouse.

Fig. 1(a): Structure of a cloud

The data stored in the cloud may be frequently updated by the users, through insertion, deletion, modification, appending, reordering, etc. Verifying storage correctness under dynamic data updates is hence of paramount importance. However, this dynamic feature also makes traditional integrity assurance techniques futile and entails new solutions. In Cloud Computing, data centers run concurrently, in a cooperative and distributed manner. Individual users' data is redundantly stored in multiple physical locations to further reduce data integrity threats. Therefore, distributed protocols for storage correctness assurance are of the utmost importance in achieving a robust and secure cloud data storage system in the real world. Recently, the importance of ensuring remote data integrity has been highlighted by several research works. While these techniques can be useful to ensure storage correctness without requiring users to keep a local copy of their data, they do not address all the security threats in cloud data storage, since they all focus on the single-server scenario and most of them do not consider dynamic data operations. As a complementary approach, researchers have also proposed distributed protocols for ensuring storage correctness across multiple servers. Again, none of these schemes is aware of dynamic data operations. As a result, their applicability in cloud data storage can be drastically limited.

Fig. 1(b): Overview of a cloud

2. CLOUD ARCHITECTURE
Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves a number of cloud components communicating with each other over a loose coupling mechanism such as messaging. The two most significant components of the cloud computing architecture are known as the front end and the back end. The front end is the part seen by the client, i.e., the user. It includes the client's network (or computer) and the applications used to access the cloud via a user interface such as a web browser. The back end is the cloud itself, comprising various computers, data storage devices and servers.

3. DESIGN METHODOLOGY
The data design transforms the information domain model created during analysis into the data structures that will be required to implement the software. The data objects and relationships defined in the entity-relationship diagram, together with the detailed data content depicted in the data dictionary, provide the basis for the data design activity. Part of data design may occur in conjunction with the design of the software architecture. The architectural design defines the relationships between the major structural elements of the software, the design patterns that may be used to achieve the requirements of the system, the component-level detail, and the interface representation. During each design activity, designers apply basic concepts and principles that lead to high quality.

Fig. 2: Cloud data storage architecture

Representative network architecture for cloud data storage is shown in Figure 2. It contains three different network entities, identified as follows: Users, who have data to be stored in the cloud and rely on the cloud for data computation; they comprise both individual consumers and organizations.
A cloud service provider (CSP), who has significant resources and expertise in building and managing distributed cloud storage servers and who operates a live Cloud Computing system. An optional third-party auditor (TPA), who has capabilities and expertise that users may not have, and who is trusted to assess and expose the risk of cloud storage services on behalf of the users upon request.

Fig. 3: Translating the analysis model into the design model

The interface design describes how the software communicates within itself. An interface implies a flow of information (e.g., data and/or control) and a specific type of behavior. Therefore, data and control flow diagrams provide much of the information required for interface design. The component-level design transforms structural elements of the software architecture into a procedural description of software components. Information obtained from the PSPEC, CSPEC, and STD serves as the basis for component design. During design, we make decisions that will ultimately affect the success of software construction and, equally important, the ease with which
software can be maintained. To ensure the security and dependability of cloud data storage under the aforementioned adversary model, we aim to design efficient mechanisms for dynamic data verification and operation, and to achieve the following goals:
(1) Storage correctness: to ensure users that their data are indeed stored appropriately and kept intact all the time in the cloud.
(2) Fast localization of data error: to effectively locate the faulty server when data corruption has been detected.
(3) Dynamic data support: to maintain the same level of storage correctness assurance even if users modify, append or delete their data files in the cloud.
(4) Dependability: to enhance data availability against malicious data modification and server colluding attacks, i.e., minimizing the effect of server failures or data errors.
(5) Lightweight: to enable users to perform storage correctness checks with minimum overhead.

4. ENSURING CLOUD DATA STORAGE
In a cloud data storage system, users store their data in the cloud and no longer possess the data locally. Thus, the availability and correctness of the data files being stored on the distributed cloud servers must be guaranteed. One of the key issues is to effectively detect any unauthorized data modification and corruption, possibly due to server compromise and/or random Byzantine failures. Besides, in the distributed case, when such inconsistencies are successfully detected, finding which server the data error lies in is also of great significance, since it is the first step toward fast recovery of the storage errors. To address these problems, our main scheme for ensuring cloud data storage is presented in this section. The first part of the section is devoted to a review of basic tools from coding theory that are needed in our scheme for file distribution across cloud servers.
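To make the coding-theory ingredient concrete, here is a minimal sketch of erasure-coded file distribution. The scheme assumes a general (m + k, m) erasure-correcting code; this sketch covers only the simplest case, k = 1, using a single XOR parity block, which already tolerates the loss of any one server. The function names and block layout are illustrative assumptions, not the paper's actual construction.

```python
def encode(data: bytes, m: int):
    """Split `data` into m equal-size blocks plus one XOR parity
    block, so the m + 1 blocks can be spread over m + 1 servers."""
    data += b"\x00" * ((-len(data)) % m)      # pad to a multiple of m
    size = len(data) // m
    blocks = [bytearray(data[i * size:(i + 1) * size]) for i in range(m)]
    parity = bytearray(size)
    for blk in blocks:
        for i, b in enumerate(blk):
            parity[i] ^= b
    return blocks + [parity]                  # m data blocks + 1 parity

def recover(blocks, lost: int) -> bytearray:
    """Rebuild the block held by the single failed server `lost`
    by XOR-ing all surviving blocks (data and parity)."""
    size = len(next(b for b in blocks if b is not None))
    out = bytearray(size)
    for j, blk in enumerate(blocks):
        if j != lost and blk is not None:
            for i, b in enumerate(blk):
                out[i] ^= b
    return out
```

A real deployment would use a Reed-Solomon-style code so that any k of the m + k servers may fail simultaneously; the XOR parity above is the k = 1 instance of the same idea.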
The token computation we consider belongs to a family of universal hash functions [11], chosen to preserve the homomorphic properties, which can be perfectly integrated with the verification of erasure-coded data [8] [12]. Subsequently, we show how to derive a challenge-response protocol for verifying the storage correctness as well as identifying misbehaving servers.

5. SECURITY ANALYSIS AND PERFORMANCE EVALUATION
Our security analysis focuses on the adversary model defined earlier. We also evaluate the efficiency of our scheme via implementation of both file distribution preparation and verification token precomputation. In our scheme, servers are required to operate only on specified rows in each correctness verification for the calculation of the requested token. We show that this sampling strategy on selected rows, instead of all rows, can greatly reduce the computational overhead on the cloud servers while maintaining detection of data corruption with high probability. Suppose nc servers misbehave due to possible compromise or Byzantine failure; in our analysis we do not limit the value of nc, i.e., nc ≤ n. Assume the adversary modifies the data blocks in z rows out of the l rows in the encoded file matrix. Let r be the number of different rows for which the user asks for a check in a challenge. Let X be the discrete random variable defined to be the number of rows chosen by the user that match the rows modified by the adversary.

6. CONCLUSION
In this paper, we investigated the problem of data security in cloud data storage, which is essentially a distributed storage system. To ensure the correctness of users' data in cloud data storage, we proposed an effective and flexible distributed scheme with explicit dynamic data support, including block update, append and delete. We rely on erasure-correcting code in the file distribution preparation to provide redundancy parity vectors and guarantee data dependability.
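As a toy illustration (a sketch under assumptions, not the authors' exact construction), the snippet below combines the two pieces just described: a keyed linear token over a few sampled rows, which the server must later reproduce in a challenge, and the detection probability P{X >= 1} that sampling r of l rows hits at least one of the z rows the adversary modified. The prime P, the HMAC-based prf, and all function names are illustrative assumptions.

```python
import hmac
import hashlib
from math import comb

P = 2**31 - 1  # a Mersenne prime; tokens are computed mod P (assumed field)

def prf(key: bytes, i: int) -> int:
    """Keyed pseudorandom coefficient for row i (HMAC-SHA256 mod P)."""
    digest = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % P

def token(rows, key: bytes, indices) -> int:
    """Linear token over the sampled rows: sum_i prf(key, i) * rows[i] mod P.
    The user precomputes it; in a challenge the server must reproduce it,
    so a mismatch localizes the misbehaving server."""
    return sum(prf(key, i) * rows[i] for i in indices) % P

def detection_probability(l: int, z: int, r: int) -> float:
    """P{X >= 1}: probability that at least one of the r sampled rows
    falls among the z rows modified by the adversary, out of l rows."""
    if r > l - z:          # more samples than untouched rows: certain hit
        return 1.0
    return 1.0 - comb(l - z, r) / comb(l, r)
```

For example, with l = 1000 rows of which z = 10 are corrupted, checking r = 300 randomly chosen rows already detects the corruption with probability above 0.95, which is the point of sampling selected rows instead of all of them.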
By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization: whenever data corruption has been detected during the storage correctness verification across the distributed servers, we can almost guarantee the identification of the misbehaving server(s). Through detailed security and performance analysis, we show that our scheme is highly efficient and resilient to Byzantine failure, malicious data modification attacks, and even server colluding attacks. We believe that data storage security in Cloud Computing, an area full of challenges and of paramount importance, is still in its infancy, and many research problems are yet to be identified. We give several possible directions for future research in this area. The most promising one, we believe, is a model in which public verifiability is enforced. Public verifiability, supported in [4] [6], allows a TPA to audit the cloud data storage without demanding users' time or resources. An interesting question in this model is whether we can construct a scheme that achieves both public verifiability and storage correctness assurance of dynamic data. Besides, along with our research on dynamic cloud data storage, we also
plan to investigate the problem of fine-grained data error localization.

REFERENCES
[1] Amazon.com, "Amazon Web Services (AWS)," Online at http://aws.amazon.com, 2008.
[2] N. Gohring, "Amazon's S3 down for several hours," Online at http://www.pcworld.com/businesscenter/article/142549/amazons_s3_down_for_several_hours.html, 2008.
[3] A. Juels and J. Burton S. Kaliski, "PORs: Proofs of Retrievability for Large Files," Proc. of CCS '07, pp. 584-597, 2007.
[4] H. Shacham and B. Waters, "Compact Proofs of Retrievability," Proc. of Asiacrypt '08, Dec. 2008.
[5] K. D. Bowers, A. Juels, and A. Oprea, "Proofs of Retrievability: Theory and Implementation," Cryptology ePrint Archive, Report 2008/175, 2008, http://eprint.iacr.org/.
[6] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable Data Possession at Untrusted Stores," Proc. of CCS '07, pp. 598-609, 2007.
[7] G. Ateniese, R. D. Pietro, L. V. Mancini, and G. Tsudik, "Scalable and Efficient Provable Data Possession," Proc. of SecureComm '08, pp. 1-10, 2008.
[8] T. S. J. Schwarz and E. L. Miller, "Store, Forget, and Check: Using Algebraic Signatures to Check Remotely Administered Storage," Proc. of ICDCS '06, pp. 12-12, 2006.
[9] M. Lillibridge, S. Elnikety, A. Birrell, M. Burrows, and M. Isard, "A Cooperative Internet Backup Scheme," Proc. of the 2003 USENIX Annual Technical Conference (General Track), pp. 29-41, 2003.
[10] K. D. Bowers, A. Juels, and A. Oprea, "HAIL: A High-Availability and Integrity Layer for Cloud Storage," Cryptology ePrint Archive, Report 2008/489, 2008, http://eprint.iacr.org/.
[11] L. Carter and M. Wegman, "Universal Hash Functions," Journal of Computer and System Sciences, vol. 18, no. 2, pp. 143-154, 1979.
[12] J. Hendricks, G. Ganger, and M. Reiter, "Verifying Distributed Erasure-Coded Data,"